Feasibility and patency of echoendoscopic anastomoses with lumen apposing metal stents depending on the gastrointestinal segment involved
EUS-guided anastomoses with LAMS have emerged as a therapeutic option for patients with obstruction of the digestive tract. However, the long-term permeability of these anastomoses remains unknown. Most of the published cases involve the gastric wall, and experience in distal obstruction is limited to a few case reports. We review our series of patients treated with LAMS for gastrointestinal obstruction and describe the technical success according to the anastomotic site and the long-term follow-up in those cases in which the stent migrated spontaneously or was removed. Out of 30 cases treated with LAMS, the EUS-guided anastomosis did not involve the gastric wall in 6 patients. These procedures were technically more challenging, as two failures were recorded (2/6, 33%), while technical success was achieved in 100% of the cases in which the stent was placed through the gastric wall. In two of the patients, one with an entero-enteric and another with a recto-colic anastomosis, stent removal after spontaneous displacement was followed by long-term permeability of the EUS-guided anastomosis (172 and 234 days, respectively). In an EUS-guided gastroenterostomy, the stent was removed at 118 days, but closure of the fistula was confirmed 26 days later. Our experience suggests that LAMS placement between bowel loops is feasible and might allow the creation of an anastomosis with long-term patency. In contrast, when LAMS are placed through the gastric wall, removal of the LAMS seems to lead to closure of the fistula. Since March 2018, we have performed 30 EUS-guided anastomoses with LAMS. Our experience includes anastomoses with the stomach and more distal anastomoses. Herein we describe the largest series of patients treated with LAMS outside the gastric wall published to date.
We also report the long-term follow-up of the fistulous tracts after spontaneous migration or removal of the stent and discuss the possible future applications of these findings.
Patients and methods
We retrospectively reviewed all the cases of EUS-guided anastomoses (EUS-A) with LAMS performed in our center from March 2018 to October 2019, describing the technical success according to the anastomotic site and the long-term follow-up in those cases in which the stent was removed or migrated spontaneously. The patients included were consecutive cases in whom, after a multidisciplinary assessment, endoscopic LAMS placement was considered appropriate. Treatment options were discussed on an individual basis. All patients provided written informed consent. All methods were carried out in accordance with relevant guidelines and regulations. For this retrospective case series, we obtained approval from the research ethics committee of the University of Navarre. One experienced therapeutic endoscopist performed all procedures, with or without trainee involvement. The AXIOS-ECTM stent ("Hot Axios", 20-mm diameter; Boston Scientific, Galway, Ireland) was used in all cases. It includes an electrocautery-enhanced delivery system that allows puncture and release of the stent in a single-step procedure, thus reducing the number of accessories to be exchanged and potentially decreasing complication rates [10]. To identify the targeted intestinal loop, the luminal water-filling technique [10] was used in most cases, passing a naso-biliary drainage catheter through the stenotic segment with the help of endoscopes of different diameters. When the targeted loop was pre-stenotic and distended, water instillation was not necessary. All cases were performed with a therapeutic linear echoendoscope (GF-UCT180; Olympus, Hamburg, Germany). A freehand direct-access and intra-channel release technique was used in all cases. Dilation of the lumen apposing metal stent was not performed in any case.
All patients underwent general anesthesia, and endotracheal intubation was used only in patients with a peroral approach. Technical success was defined as adequate positioning and deployment of the stent as determined endoscopically and radiographically. Patients remained hospitalized and initiated a liquid diet immediately after the procedure or up to 2 days later, depending on the type of procedure. The patients were discharged home when they showed adequate tolerance to diet progression.
Results
Thirty patients were included in this retrospective study, of whom 16 were male (53.3%). Mean age was 67.1 ± 11.54 years (range 34-90). Median follow-up was 164 days (IQR 48-289). The indication for EUS-A was a malignant obstruction in 27 patients, 22 of whom had distant metastases. In 24 patients the EUS-A was the first-choice technique, while in 6 it was a rescue procedure after a failed endoscopic (2 cases) or surgical (4 cases) approach. Twenty-four (80%) LAMS had their proximal flange located in the gastric lumen, whereas in 6 (20%) the gastric wall was not involved. EUS-guided anastomoses without gastric wall involvement are described in Tables 1 and 2.
Table 1. Characteristics of patients undergoing EUS-guided anastomosis without involvement of the gastric wall. 1. Afferent loop syndrome. 2. Patient with diffuse gastric cancer. 3. The goal was to create an ileosigmoidostomy. Although the LAMS was released without incident, the punctured target loop turned out to be the jejunum instead of the ileum. Therefore, a jejuno-sigmoidostomy was created. 4. ARR Anterior resection of the rectum. *These are the two patients described in the text, in which the long-term permeability of the anastomosis was confirmed once the LAMS was removed.
Technical success was achieved in all 24 cases involving the gastric wall (Table 3), and all 24 patients proceeded to an oral diet. Of 21 cases with malignant GOO, we had clinical follow-up in 17 patients, and 16 of them died while maintaining an oral diet, 8-423 days after LAMS placement.
Only one patient (described below) required surgical rescue after intentional stent removal, due to closure of the fistula. She is currently alive. Three patients presented with benign GOO due to acute pancreatitis. Oral intake began the day after the LAMS procedure, with good tolerance. One patient required a second stent 25 days after the first procedure, due to GOO symptoms secondary to a buried stent. A second overlapping LAMS was placed, and both were removed 171 days later, when there was evidence that the GOO had resolved; closure of the fistula was endoscopically confirmed 39 days later. The Axios stent has recently been removed in the other two patients, after 513 and 625 days respectively, and we are awaiting further follow-up. Of the 6 patients in whom a lower EUS-A was intended, 2 failures were recorded. The only deployment failure (1/6, 16.7%) corresponded to an ileocolic anastomosis in a patient with stage IV pancreatic adenocarcinoma and a poor bowel preparation (Tables 1, 2, case 6). In this case, due to the great torquing maneuvers needed to reach the obstruction site, the Axios device could not be advanced fast enough and did not enter the ileal loop, which contracted in response to the electrocautery. There was a perforation that required bowel resection and surgical anastomosis. The second case (Tables 1, 2, case 4) was a failed attempt to create an ileo-sigmoidostomy. Although the LAMS was released without incident, the punctured target loop turned out to be the jejunum instead of the ileum. Therefore, a jejuno-sigmoidostomy was created, and laparoscopic surgical rescue was needed 1 month later. In the other 4 patients (67%) we placed the LAMS successfully and without major complications. These cases include the following anastomoses (Tables 1, 2): colo-rectal (case 1), jejuno-jejunal (cases 2 and 3) and ileo-ileal (case 5).
Table 2. Follow-up of patients undergoing EUS-guided anastomosis without involvement of the gastric wall.
All patients, except case 1, had progressive cancer disease. Only in one patient (case 4) was surgery required, due to recurrence of bowel obstruction secondary to peritoneal carcinomatosis. Patency time ranged from 1.5 to 9 months, and death was related to oncological progression. In two patients with distal EUS-A the stent migrated spontaneously and required removal due to bowel obstruction. The first patient was a 65-year-old woman with a history of advanced ovarian adenocarcinoma who underwent radical pelvic surgery with hysterectomy, bilateral salpingo-oophorectomy and sigmoid resection, colorectal anastomosis and diverting ileostomy. Complete obstruction of the colorectal anastomosis was observed during endoscopic follow-up. Afterwards, she presented a complete disconnection of the surgical anastomosis, with a closed rectal stump and a blind colonic stump, and no oncological relapse. Subsequent treatment options were discussed in a multidisciplinary setting. Surgical treatment was dismissed due to a frozen pelvis, and the EUS-guided approach was considered the most appropriate. A pediatric colonoscope was advanced through the ileostomy towards the transverse colon. A guidewire was coiled under fluoroscopic guidance into the colon stump. A nasobiliary drainage catheter was advanced over the wire into the colon, and high-pressure water was injected into the blind segment. The fluid-filled colonic lumen was localized transrectally by EUS and punctured with a cautery-enhanced LAMS (AXIOS-ECTM, 20-mm diameter; Boston Scientific, Galway, Ireland). The proximal flange of the stent was deployed into the colon and the distal flange into the rectal stump. The colorectal endoscopic anastomosis with LAMS was completed without complications (Fig. 1A,B). Three-month follow-up endoscopy showed that the stent had migrated distally and confirmed the anastomosis to be large and patent (Fig. 1C). Reversal of the ileostomy was successfully performed.
At 15-month follow-up, the patient reported normal bowel movements. The second patient was a 57-year-old man with gastric outlet obstruction due to adenocarcinoma of the anthropyloric region, treated with a palliative surgical gastroenteroanastomosis. Our assessment was requested as the patient complained of repeated vomiting. Gastroscopy showed a complete obstruction of the anthropyloric region. The scope was introduced through the surgical anastomosis into the efferent loop, and at 15 cm an infiltrative stenosis was observed. A guidewire was passed deep into the jejunum and, over this wire, an 8.5 F catheter was advanced for irrigation with saline and contrast, verifying that the infiltration spread diffusely along the first 30 cm. High-pressure water was infused through the catheter. Subsequently, from the gastric lumen, we searched with the echoendoscope for a distended loop, but all were too far from the gastric wall. Moving towards the stenotic efferent loop through the surgical anastomosis, we identified a small bowel loop, which was distended and close to the prestenotic jejunal wall. A "Hot Axios" LAMS, 20 mm in diameter, was released, creating a side-to-side anastomosis between the two intestinal loops (Fig. 2A,B). The patient progressed well, maintaining oral feeding. Three months later, it was observed that the prosthesis had migrated, occupying the jejunal lumen. As this displacement hindered the passage of food, the stent was retrieved with foreign body forceps. After stent removal, a patent and mature side-to-side jejuno-jejunal anastomosis was observed (Fig. 2C), so no further procedures were performed. The patient maintained oral feeding until his death 6 months later.
Given the patency outcome after LAMS migration in these two patients, and after agreeing on the procedure in the Interdisciplinary Committee for Digestive Tumors, we decided to assess the long-term patency of a gastrojejunal anastomosis performed with LAMS after intentional removal of the stent. The patient was a 65-year-old female with Lynch syndrome and GOO due to a duodenal adenocarcinoma; she gave her written informed consent for the off-label LAMS use. She had a history of surgery and chemotherapy for adenocarcinoma of the right colon and endometrial adenocarcinoma, for which she also received brachytherapy. A gastrojejunal "Hot Axios" was placed. After receiving neoadjuvant chemotherapy for her duodenal carcinoma, the patient was informed of the off-label procedure and gave her written informed consent. The duodenal tumor was resected, and the LAMS was maintained as a permanent anastomosis (Fig. 3A,B) in the context of a Whipple surgery. The stent was removed 4 months after placement, when the fistulous tract was considered mature. Twenty-six days later, the patient presented with symptoms of GOO, and closure of the fistula was confirmed endoscopically (Fig. 3C). A second LAMS was placed, and finally a surgical gastroenteroanastomosis was necessary.
Discussion
Lumen-apposing metal stents (LAMS) were developed for access to and drainage of pancreatic fluid collections [8,9]. However, LAMS have been increasingly used for other conditions [1], such as gallbladder drainage, intestinal obstruction, abscess drainage, treatment of afferent loop syndrome, and refractory gastrointestinal strictures [11]. LAMS are fully covered and short, with a wide flange at each end. The short length and wide flanges reduce the risk of migration and improve patient tolerance. Permanent LAMS placement is not advised in applications such as pancreatic fluid collection drainage because of the risk of bleeding and of the stent becoming buried [12].
Although there is no consensus on whether and when stent removal should be performed, a 4-week interval has been proposed [13]. Little is known about how much time should pass before a LAMS is removed in specific indications such as EUS-guided gastrojejunostomy (EUS-GE), as most studies have focused on short-term outcomes immediately after stent placement [14]. When used for benign gastric outlet obstruction (GOO), authors recommend removing the stent as soon as there is evidence that the obstruction has resolved, via cross-sectional imaging, upper GI series, or endoscopy [4]. In our series, only 3 patients were treated for benign GOO. The authors do not describe the evolution of the gastroenteric fistula after removal of the LAMS, but it is expected that, once the primary cause of GOO resolves, LAMS removal is followed by progressive closure of the EUS-GE without leaving sequelae. Spontaneous closure of large transmural gastric defects after removal of an AXIOS stent has been reported [15]. In patients with EUS-directed transgastric ERCP after gastric bypass, follow-up data on fistula closure have recently been reported [16]. All the cases involved the gastric wall, and the authors stated that persistent fistula was uncommon (observed in only 1 of 11 cases). However, these patients had surgically altered gastric anatomy (bypass procedure), so these observations may not be applicable to patients with an intact stomach. In our third patient, the gastrojejunal AXIOS stent was retrieved 4 months after deployment, and the anastomosis closed completely within a few weeks. We believe that LAMS can be deployed in any part of the gastrointestinal tract, provided that the two walls are close enough and there are no interposed vessels. In addition, the anastomosis site must be accessible to the echoendoscope. Therefore, EUS-A with LAMS between intestinal loops is a procedure reserved for selected patients. In our series, EUS-A did not involve the gastric wall in 6 patients.
Data in the literature are scarce, limited to two single cases [8,9] and afferent loop syndrome [17]. Our experience shows that EUS-A is technically more challenging in these cases, especially if it is necessary to introduce the echoendoscope deeply. Several problems make placement of the LAMS in the distal tract more complex: (1) For transgastric LAMS, the anatomy is constant, since the angle of Treitz is adjacent to the stomach, while in distal anastomoses the anatomy is variable, and this can compromise the passage of the irrigation catheter, the stability of the endoscope, and the proximity of the ends to be joined. (2) The method of transgastric anastomosis is quite standardized. However, for distal anastomoses, almost all patients are different, depending on the intestinal segment involved and previous treatments, and technical variations are often necessary. Our main technical failure corresponded to an ileocolic anastomosis. An additional challenge when placing distal (non-gastric) AXIOS stents is that the small bowel is a mobile intraperitoneal organ and can move away during the procedure. Finding the target loop in the small bowel can be difficult, as happened in another of our patients, in whom we placed the LAMS in a jejunal loop instead of the ileal lumen. Follow-up for cases 3, 4, and 5 is limited, as these patients died within 2 months of LAMS placement. There may be concerns about offering this procedure to patients with a prognosis of only a few months, but we consider that in these cases therapeutic EUS offered a palliative approach that provided a better quality of life, allowing oral feeding until death. Of our six cases with non-gastric EUS-A, two LAMSs required removal because of migration into the bowel lumen, 117 and 198 days after deployment, respectively.
The spontaneously displaced stents were removed endoscopically, and patency of the fistula previously created with the LAMS was observed. In the subsequent follow-up, both patients progressed well, and no closure of the anastomosis was observed after 172 days in one case (until the death of the patient) and after 234 days in the other, who currently remains asymptomatic. If the long-term patency of EUS-guided anastomoses after stent retrieval is confirmed in larger series, this could become a minimally invasive approach in complex patients with very few therapeutic alternatives. We did not achieve long-term permeability of a gastrojejunal anastomosis performed with LAMS after intentional stent removal (4 months after placement). We think that this different course once the stent is retrieved or spontaneously migrates is likely related to the variable structure of the muscularis propria layer along the digestive tract: in the small intestine, it includes an internal circular layer and an external longitudinal layer; in the colon, the longitudinal layer is grouped into three bands called taeniae coli; while in the stomach there is a third, oblique layer, which may favor the closure of fistulas. Another factor may be that gastric motility [18] produces a hypertrophic response on the gastric side, with the ability to bury foreign bodies, which is not so clearly observed in other segments of the digestive tract. If the gastric wall is involved, it is possible that the use of larger-caliber stents [19] and their later removal could allow long-term permeability of the anastomosis. However, there are currently no data in the medical literature to support this theory. Theoretically, LAMS allow adhesion between two organs, as is achieved with a surgical anastomosis [10].
We did not verify whether, in those cases in which the patency of the anastomoses was maintained after stent removal, there was fusion of the different layers of the intestinal wall, as was shown in animal models [20], since one of the patients is currently alive (without clinical obstruction after 4 months) and we have no autopsy data from the deceased patient. In conclusion, our observations suggest an increased role for EUS-A in the management of patients not only with GOO but also with other disorders, such as small bowel obstruction (e.g., anastomotic strictures) or non-stentable obstructive neoplasia of the right colon. Distal stenting is more challenging than LAMS placement involving the gastric wall, but it is feasible in selected cases. Moreover, when the gastric wall is not involved in the EUS-A, long-term permeability of the newly created anastomosis may be expected after stent retrieval or spontaneous migration. In these cases, EUS-A offers the potential benefits of a permanent surgical bypass while maintaining a minimally invasive approach. In the future, well-designed RCTs and prospective studies are needed to further validate these findings.
The Genetic Code Assembles via Division and Fusion, Basic Cellular Events
Standard Genetic Code (SGC) evolution is quantitatively modeled in up to 2000 independent coding ‘environments’. Environments host multiple codes that may fuse or divide, with division yielding identical descendants. Code division may be selected: sophisticated gene products could be required for an orderly separation that preserves the coding. Several unforeseen results emerge: more rapid evolution requires unselective code division rather than its selective form. Combining selective and unselective code division, with/without code fusion, with/without independent environmental coding tables, and with/without wobble defines 2⁵ = 32 possible pathways for SGC evolution. These 32 possible histories are compared, specifically, for evolutionary speed and code accuracy. Pathways differ greatly, for example, by ≈300-fold in the time to evolve SGC-like codes. Eight of the thirty-two pathways, employing code division, evolve quickly. Four of these eight, which combine fusion and division, also unite speed and accuracy. The two most precise, swiftest paths, and thus the most likely routes to the SGC, are similar, differing only in fusion with independent environmental codes. Code division instead of fusion with unrelated codes implies that exterior codes can be dispensable. Instead, a single ancestral code that divides and fuses can initiate fully encoded peptide biosynthesis. Division and fusion create a ‘crescendo of competent coding’, facilitating the search for the SGC and also assisting the advent of otherwise uniformly disfavored wobble coding. Code fusion can unite multiple codon assignment mechanisms. Thus, via code division and fusion, an SGC can emerge from a single primary origin via familiar cellular events.
Introduction
The problem. Automated total searches of ≈2.5 × 10⁵ bacterial and archaeal genomes find [1,2] only slightly altered genetic codes, related to the Standard Genetic Code (SGC). Hence, true alternative codes are exceedingly rare on modern Earth. Modern biota, therefore, are convincingly traced to a single ancestral group encoding peptides using a close SGC relative. This ancient, all-inclusive ancestor presents a problem of ultimate significance for biology and is the topic here.
Early coding. The code's origin presents an inevitable succession problem: one seemingly cannot evolve proteins, like aminoacyl-RNA synthetases (AARS), without prior competent coding/protein synthesis. AARS are complex amino acid polymers that bind several substrates and perform stereo- and regiospecific chemistry. Thus, one must evolve complex, accurate peptide synthesis using precursors of modern protein AARS, presumably precursors composed of RNA. Thus, RNA-world evolution of AARS is implied. This work characterizes this early period before the appearance of the protein AARS and their later complex enzymatic evolution.
Incorporating code division. We follow the evolution of numerous RNA-based coding tables through time in code evolution environments, using Monte Carlo kinetics [3]; see Section 4.
The time for one round of evolution in a code environment is called a passage. An evolving environment undergoing a passage may contain zero, one, tens, or hundreds of codes; in a mechanism-structured plot, the effects of each mechanism can be read from the periodicity of the plot. Multiple mechanistic effects are evident in an ordinary two-dimensional figure. Structured plotting is illustrated (Figure 1A) by three groups of codes that have different thresholds for code division (cc, the completeness criterion) at 1, 10, and 18 codon assignments. Within each such group, codes have an unselective probability of division per passage of 0.125, 0.25, and 0.5. Thus, one can read the effects of increasing division within groups of three, and also read the effects of different division thresholds by comparing such triple groups.
Effects of code division. In Figure 1A, increased division (Pdiv) always reduces the time to evolve SGC-like coding, that is, ≥20 assigned functions. The speed-up for the same Pdiv change is slightly smaller at a higher threshold (comparing the mean slopes within groups of three). In addition, evolution is increasingly rapid as the division threshold is lowered from near-completion (set at ≥20 functions encoded) to no threshold at all on the left (threshold at one function; any code can divide). Figure 1A therefore presents a non-trivial result: non-selective code division (mechanism #3, red square), acting throughout evolution, evolves SGC-like coding the fastest.
Division and rate of evolution. Division is revisited in Figure 1B, plotting the speed of evolution versus the number of code divisions needed to reach an SGC-like assignment. In Figure 1B, the time to ≥20 encoded functions, with Figure 1A's variety of division probabilities and thresholds, declines rapidly as code division increases. The fastest mean SGC-like evolution with code division, 92 passages, is more than three times faster than previous environments with similar code passage probabilities [10,13].
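To make the kind of Monte Carlo kinetics described above concrete, the following Python sketch evolves a toy environment under unselective code division. This is not the authors' model: only the names Pdiv (division probability per passage), cc (completeness criterion), and the ≥20-function target come from the text; the one-assignment-per-passage rule, the population cap, and all function names are simplifying assumptions.

```python
import random

def evolve_environment(p_div=0.25, cc=1, target=20, n_functions=20,
                       max_codes=500, seed=0):
    """Toy coding environment. Each passage, every code gains one random
    codon-function assignment; any code with >= cc assignments may divide
    (probability p_div) into two identical descendants. Returns the number
    of passages until some code reaches `target` assigned functions."""
    rng = random.Random(seed)
    codes = [set()]                  # each code = set of assigned functions
    passages = 0
    while max(len(c) for c in codes) < target:
        passages += 1
        for code in list(codes):
            code.add(rng.randrange(n_functions))   # one (possibly repeat) assignment
            if len(code) >= cc and len(codes) < max_codes and rng.random() < p_div:
                codes.append(set(code))            # unselective division: identical copy
    return passages
```

With cc = 1, any code can divide, mirroring the unselective pathways the text finds fastest; raising cc toward near-completion delays division until codes are more sophisticated, the selective variant.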
Small effects. A mechanism-structured plot is also useful when substantial effects are absent. Figure 1A plots, on its rightward axis, the number (≈38) of random initial codon assignments required to reach ≥20 different assigned functions. This is hardly altered across mechanisms one through nine. Close inspection of displacements from the least-squares dashed line discloses periodic behavior; fast evolution requires slightly fewer assignments. However, the structured plot (Figure 1A) highlights how small this effect is for these changes in code division. Fixed assignments are not a rule; pathways can use assignments inefficiently. Still, even conditional assignment constancy will be useful below in clarifying complex evolution.
Presenting code accuracy. A general measure of SGC similarity is frequently useful. One would like to avoid codes that assign an SGC-like number of codons but to different functions than in the actual SGC. In this work, misassignments (abbreviated "mis") with respect to the biological SGC are counted. Codes with no difference from SGC assignments are denoted mis0, those with one difference are mis1 codes, and so on to mis2, mis3, and beyond. The fraction of SGC-like assignments provides an index of distance that meets our need to measure evolutionary accuracy.
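The mis0/mis1 counting scheme can be written out in a few lines. The snippet below is a hypothetical illustration, not the paper's code: the codon table is a toy subset of the SGC, and the function and variable names are invented.

```python
# Toy reference: a small subset of SGC codon -> function assignments.
SGC_SUBSET = {"UUU": "Phe", "UUC": "Phe", "AUG": "Met", "UGG": "Trp"}

def misassignments(code, reference=SGC_SUBSET):
    """Count assigned codons whose function differs from the reference.
    A code with zero differences is 'mis0', one difference 'mis1', etc."""
    return sum(1 for codon, fn in code.items()
               if codon in reference and fn != reference[codon])

mis0_code = {"UUU": "Phe", "AUG": "Met"}   # matches the reference everywhere
mis1_code = {"UUU": "Leu", "AUG": "Met"}   # one codon assigned to the wrong function
```

Averaging this count over many near-complete codes in an environment gives the mean mis statistic used in the text.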
However, this poses a problem of precision: SGC-identical, mis0 codes can be infrequent, even pragmatically unmeasurable, for inaccurate evolutionary modes. However, Figure 2 shows how this problem can be met. The distribution of errors is smooth and unimodal: the fraction of SGC-like codes (here, the fraction that is mis0 and mis1) rises smoothly as mean mis decreases in near-complete codes. Because mean mis is measured in up to two thousand environments, average misassignment is usually known with precision. Proximity to the SGC is therefore measured (Figure 2) either by calculating mean misassignment (mis; accuracy better when smaller) among the most complete codes or by counting codes nearest the SGC when accessible (mis0, mis1; accuracy better when larger).
Mechanisms and code accuracy. Accuracy as mean misassignments is plotted in Figure 3, like time in Figure 1A, versus Figure 1A's division pathways one to nine. A Figure 1A-like pattern reappears. Therefore, code accuracy is greatest with more division (greater Pdiv) in several contexts. The sensitivity of code accuracy to division frequency declines significantly as the division threshold increases (Figure 3). Absolute accuracy is also greatest when the threshold (completeness criterion, cc; square marker) is low: pathway #3 is the most accurate (Figure 3). The most accurate code evolution utilizes frequent division and approaches the SGC quickly without selection for coding sophistication; any code at all meets a one-assignment division "threshold". This result reappears in a much more complex mechanistic context below.
Moreover, in Figure 3, code division has an interesting property previously shown for code fusion [10]: more division (greater Pdiv) reduces error, implying constraint of the present mixture of initial SGC and random assignments. Such adherence to an underlying coding consensus (Figure 3) is weakened if a threshold delays the initiation of code division. However, more code division, not division selecting for code progress, produces an accurate code (Figure 3) while also evolving it quickly (Figure 1A).
Five-dimensional comparison of 32 pathways. Incorporation of division effects into a Monte Carlo kinetic scheme (Methods) for specific code table evolution defines 32 pathways toward the genetic code: with/without code division (probability of division, as well as a division threshold), with/without code fusion [10], with/without independent coding tables [10], and with/without simplified Crick wobble [3]. The 32 pathways are quantitatively compared in Figure 4, using the structured display method of Figure 1A to organize the five-dimensional data (see the Supplementary File).
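The 2⁵ = 32 combinations of five binary factors can be enumerated mechanically. The sketch below is illustrative only: the factor labels (cc, div, fus, tab, wob) follow the text's abbreviations, but the enumeration order is arbitrary, so any index it yields will not match the paper's mechanism numbering, which is fixed by its Figure 4.

```python
from itertools import product

# Five binary factors, labeled as in the text: completeness threshold,
# code division, code fusion, independent coding tables, Crick wobble.
FACTORS = ["cc", "div", "fus", "tab", "wob"]

# Every on/off combination of the five factors: 2**5 = 32 pathways.
pathways = [dict(zip(FACTORS, bits)) for bits in product([True, False], repeat=5)]

def label(pathway):
    """Render a pathway the way the text does, e.g. 'nocc div fus notab nowob'."""
    return " ".join(f if on else "no" + f for f, on in pathway.items())

# The combination the text singles out as fastest (its mechanism #20).
fastest = {"cc": False, "div": True, "fus": True, "tab": False, "wob": False}
```

Enumerating the factor settings this way makes it easy to tabulate simulated speed and accuracy per pathway, as the structured display in Figure 4 does.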
Figure 4 presents the time to reach ≥20 encoded functions (in passages, ordinate) versus all 32 numbered mechanisms on the x-axis. For example, the minimal time to evolve ≥20 encoded functions occurs via mechanism #20, which (reading the titles above and the vertical line through the point: legend, Figure 4) utilizes no completeness threshold for division (nocc), incorporates probable code division (div), allows codes to fuse (fus) but has no independent codes forming in its environment (notab), and evolves during initial assignments in the absence of wobble (nowob; vertical line). Path #20 reappears frequently below.
A glance identifies the fastest evolution. In Figure 4, mechanisms that have no completeness threshold (nocc: cc = 1) and probable code division (div) form a "canyon" (mechanisms #17-24; shaded bar), each of whose eight pathways evolves ≥20 encoded functions faster than any of the other 24 paths examined.
Moreover, this nocc div canyon is the major difference between Figure 4 left and right. Superior unselective code division, first seen in Figure 1A, reappears here in a broader mechanistic context. Therefore, the path of least selection [14], that is, the probable evolutionary path, will be a nocc div unselective route. Accordingly, code division greatly changes early code evolution, and nocc and div will be necessary elements in the best SGC pathways.

A glance identifies the slowest evolution. In Figure 4, the four slowest routes to the SGC have in common that codes do not divide (nodiv) and that no additional codes arise alongside by independent origination (notab). Under such conditions, fusion is irrelevant because there are no additional codes to fuse. Thus, for these four slowest pathways, fus/nofus mechanisms are about equally poor, because code fusion is inaccessible and irrelevant. A single code in each environment must evolve alone to SGC proximity, and this requires a complex set of events, with many digressions, making these the most improbable evolutionary routes. This matches previous observations [10] and rationalizes the superior pathways considered below, all of which exploit code-code interactions.

Wobble is always inhibitory. Among the 32 pathways in Figure 4, 16 encode using wobble and 16 do not. One can consider the 16 wob (no vertical line)/nowob (line) pairs together by noting that each wobbling pathway (no vertical line) is accompanied by a non-wobbling pathway immediately to its right (line) that differs only in lacking a simple Crick wobble [3].
Mechanisms differ in their sensitivity to wobble. Slow single-coding-table environments are very much impaired if they must use wobble assignments. In contrast, the eight mechanisms of the nocc div canyon (Figure 4) are less sensitive to inhibitory wobble effects. However, throughout all 16 wob/nowob pairs in Figure 4, wobble prolongs evolution to the SGC in 16 varied mechanistic contexts. This extends previous findings that assignments that commit more triplets always impede progress toward complete coding [13,15] and that wobble specifically disrupts the evolution of codes that most resemble the SGC [12]. Figure 4's kinetics strongly reinforce previous structural arguments; accurate wobble requires a complex ribosomal isomerization [10,16] and a complex functional tRNA structure [17,18]. Thus, wobble encoding probably appeared late in RNA code evolution, after most functions.

The addition of simple Crick wobble to present codes adds, minimally, two misassignments, because unique SGC encodings, AUG/Met and UGG/Trp, are not accounted for here. Unique assignments are most simply explained as survivors from the early non-wobbling era defined just above. However, an essential code transition from unique to wobbling assignments can definitely bear more thought.

Independently originating codes (tab) speed SGC evolution, but not in the nocc div canyon. The effect of multiple independent codes arising side-by-side, then interacting within an SGC-evolving environment, can also be assessed in Figure 4. Pairs of tab/notab mechanisms, in which the only change is the absence of independently evolving coding tables, have sequential odd or even numbers.
For example, mechanisms #10 ⇔ #12 and #18 ⇔ #20 differ only in lacking parallel environmental codes in the higher-numbered mechanisms. However, the two code pairs differ greatly in the resulting effect. Loss of other codes slows SGC evolution significantly on the left in Figure 4 (#10 to #12; 447 to 1345 passages), where nodiv cuts off other codes arising by division. In contrast, on the right (#18 to #20; 133 to 121 passages), with a supply of alternative fusion partners available from code division, parallel independent codes are instead slightly inhibitory to evolutionary progress. Similarly, for all codes in the 12 tab/notab pairs outside the canyon and each of the 4 such pairs within the #17 to 24 canyon, codes arising by code division are always more favorable partners than independent coding tables. This gathering of coding information from several codes into one nascent code returns in the Discussion.

Speed and accuracy are related. Given that genetic codes can adhere to underlying consensus assignments [10], the existence of such adherence (as in Figure 3), as well as evolutionary speed (as in Figure 4), is of importance. For the highly varied 32 possible mechanisms, as for the smaller, more uniform group of code divisions (Figure 2), the fraction of codes near the SGC increases as the mean number of misassignments declines. That is, the distribution of error regularly sharpens as the mean misassignment in ≥20 function codes declines, drawing in toward an SGC consensus. In Figure 5, paralleling Figure 2 for division variation only, mean misassignment (mis) is a useful measure of SGC proximity, represented as the sum of the mis0 and mis1 code fractions. In fact, SGC-like codes increase more rapidly as mean misassignment closes in on the SGC, yielding a very sensitive index of SGC proximity (Figure 5).
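The mis metric and the mis0/mis1 fractions just described reduce to simple counting; a sketch follows. The dictionary-based code tables and the tiny `sgc` fragment used in testing are illustrative stand-ins, not the paper's data structures.

```python
def misassignments(code, sgc):
    """Count assigned triplets whose function differs from the SGC."""
    return sum(1 for triplet, fn in code.items() if sgc.get(triplet) != fn)

def mis_summary(codes, sgc):
    """Mean mis, plus the mis0 and mis1 code fractions; the sum of the
    last two is the SGC-proximity index used in the text."""
    mis = [misassignments(c, sgc) for c in codes]
    n = len(codes)
    return (sum(mis) / n,
            sum(m == 0 for m in mis) / n,
            sum(m == 1 for m in mis) / n)
```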
In Figure 6, mis is plotted for the complete structured set of 32 pathways. A comparison of mechanism-structured plots in Figures 4 and 6 shows that evolutionary speed and accuracy are related; the two plots are similar over most pathways. For example, there is again a mechanism #17-24 nocc-div canyon, within which the lowest global code error appears. However, small differences in speed and accuracy from independent tables are observed (e.g., pathway #9).

Figure 7 makes explicit this interaction between speed and accuracy by plotting time to evolve ≥20 encoded functions in passages versus the resulting mis. There is a clear relation, though with some variation: the least squares line accounts for 86% of the variance in misassignment. Therefore, fast evolution tends to occur using pathways that also approach SGC consensus. Figures 1A, 3, 4, and 6 convey a decisive property of code evolution: it is not necessary to choose between rapid code evolution and code adherence. There are quick routes to codes that are also SGC-like.

More quantitatively, mechanisms #18 and 20 most quickly present near-complete codes (Figure 4). These pathways have low levels of misassignment: more than a quarter of all ≥20 function codes are 0, 1, or 2 assignments from the SGC. In fact, codes identical to the SGC (mis0) are more than 1 in 40 of these near-complete coding tables. Nocc-div canyon codes again provide the least selection, that is, an evolutionary route favored because it requires the least selected alteration to become the SGC [14].

Distinguishing canyon codes. To focus discussion, mechanisms #18 and 20 are put foremost because they most rapidly produce complete coding (Figure 4). As Figure 6 shows, they do not precisely correspond to maximal resemblance to the SGC; canyon pathways 22 and 24 have slightly greater mean SGC similarity.
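The least-squares relation underlying Figure 7 (a fitted line accounting for 86% of the variance in misassignment) is ordinary linear regression. A self-contained sketch of the R² computation for any such paired speed/accuracy data follows; the numbers in the usage test are invented, not the paper's values.

```python
def r_squared(x, y):
    """Fraction of variance in y explained by a least-squares line on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot
```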
Differences between canyon codes appear small but are significant. Between ≥20 functions in mechanisms #20 and #18, 11.2 passages intervene. Given their standard errors in 1000 environments each, a two-tailed, unequal variance t-test yields 1.8 × 10⁻¹⁵ as the probability that these mean times are the same. Thus, the time profiles in Figure 4 convey statistically valid differences. Pathway #20 really arrives at ≥20 encoded functions before #18. However, this significance leaves open an essential question.

What code differences are significant? Are Figure 4's time differences, however statistically significant, of importance to evolution? This question can be approached quantitatively using the notion of least selection [14]. Figure 8 combines code completeness and accuracy in one metric. The abundance of codes that both encode ≥20 functions (completeness) and are accurate (fewest differences from SGC assignments) is taken as the distance to be crossed by selection. This is most relevant at early times, when such codes are first exposed to selection. In Figure 8, the mean time to encode ≥20 functions for mechanism #20, 121 passages (Figure 4), is taken as a reference. Figure 8 plots SGC proximity for all eight canyon-bottom mechanisms at that early time, using the same structured list as Figures 4 and 6. Relevant pathway abbreviations again appear above each datum.
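The unequal-variance comparison reported for #18 versus #20 is Welch's t-test, which can be reproduced on any two samples of evolution times. A pure-stdlib sketch of the statistic and its Welch-Satterthwaite degrees of freedom follows (the samples in the test are invented; converting t and df to the quoted p-value would additionally need a t-distribution CDF, e.g. from scipy.stats):

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's unequal-variance t statistic and its
    Welch-Satterthwaite degrees of freedom."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)  # sample variances (n - 1 denominator)
    se2 = va / na + vb / nb
    t = (mean(a) - mean(b)) / sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df
```

With 1000 environments per mechanism, a mean gap of 11.2 passages against small standard errors gives a very large |t|, hence a vanishing p-value like the one quoted.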
Least selection resolves a fusion effect on accuracy. Figure 8's refined distance index resolves canyon pathways. There is a rift in the canyon floor: leftward pathways in Figure 8 are much closer to the SGC than rightward ones. Consulting the topward abbreviations, fusing pathways (#17-20) are much closer to the SGC than non-fusing ones (#21-24). Such evolutions may also employ independent tables or not (tab/notab) and/or may use wobble assignments or not (wob/nowob), but fusing routes remain always closer to the SGC. This resembles prior conclusions [10,13] that identified code fusions as decisive for the rapid appearance of SGC-like codes.

A canyon mechanism worth noting is #24, which relies on non-selective code division alone, nocc div nofus notab nowob. It is significantly slower than #18 and #20 to complete codes (Figure 4) but has a very good overall error (Figure 6) and is deficient only in total SGC proximity (Figure 8). Code division, even acting alone in pathway #24, suffices for moderately rapid code evolution.
Further, Figure 8 again favors the exclusion of wobble during assignment [3] (as in Figures 4 and 6) and also favors the absence of parallel independent codes (notab) in the two mechanistic environments where it can be compared with a similar path (#17 vs. #19 and also #18 vs. #20).

Four most competent pathways. Thus, favored paths to the SGC are defined: via the leftward canyon, nocc div fus. Moreover, the most favored pathway is resolved. That is path #20, nocc div fus notab nowob.

But, given that choice, tab/notab and wob/nowob options are similar (differing by <<2-fold). At the early times of Figure 8, for example, near-complete codes identical to the SGC using the second-best pathway #18 are 77% the abundance of similar codes via pathway #20. Thus, both #18 and #20 must be considered as potential SGC pathways.

Code division and fusion collaborate, but independently. It is no surprise that among the most SGC-like codes here, code fusion is frequent. Probabilities were chosen to make fusion effective. However, a new question arises from the introduction of code division. Are division and fusion related or independent features of code evolution? Though no div fus interaction was consciously implemented, human intuition is untrustworthy when so many processes interact.

Figure 9 plots the product of the fraction of best codes fusing with the fraction dividing, for the 8/32 pathways that use both div and fus and the 24 that do not (plotted at zero). This is compared to the observation: the fraction of best codes employing both fusion and division is counted among the results. Figure 9 shows that the product of the fus and div fractions and the observed conjoined fus div in the results are virtually identical. Therefore, fusion and newly introduced division aid code evolution, but by acting independently.
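Figure 9's independence check (product of marginal fractions versus the joint fraction) reduces to a few lines of counting. The per-code flags below are invented sample data, and `independence_gap` is an illustrative name, not the paper's code.

```python
def independence_gap(flags):
    """flags: list of (used_fusion, used_division) booleans, one per code.
    Returns (P(fus) * P(div), P(fus and div)); near-equal values mean the
    two processes act independently, as in Figure 9."""
    n = len(flags)
    p_fus = sum(f for f, d in flags) / n
    p_div = sum(d for f, d in flags) / n
    p_joint = sum(f and d for f, d in flags) / n
    return p_fus * p_div, p_joint
```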
A second, more efficient crescendo. Figure 10 shows early kinetics for the reproducibly superior #20 pathway. In particular, it shows the two code species closest to the SGC (≥20 encoded functions with mis0 or mis1). There is a rapid rise after fusion becomes significant, then a prolonged presence of ≥20 assignment codes, zero or one assignment from the SGC. This accurate era lasts hundreds of passages. Thus, there will be many code assignments, decays, captures, fusions, and divisions during this period. Said another way, proficient (Figure 10) codes vary across time, but continuously present novel near-SGC relatives for selection.

Moreover, at the top of Figure 10 is the fraction of ≥20 function codes that have assignments from both code fusion and division. Best SGC candidates arise nearly entirely by code division and fusion combined (dashed line, Figure 10). This parallels the succession of highly competent codes from fus alone [10], but this fus div crescendo arises more quickly and yields more frequent SGC-like codes. At the 150-passage peak, there are 1 in 340 live ≥20 function mis0 codes (1 in 1000 total codes, including unsuccessful fusions: Figure 10), or 1 in 74 live ≥20 function mis1 codes (1/210 of total codes: Figure 10). A selection would seem to easily find these relatively frequent SGC-like codes. Therefore, fusion with division is a more probable route to the SGC than fusion alone [13].

Discussion

whose winners had uniquely efficient protein biosynthesis and carried their genetic code to predominance. Such an era has already been persuasively modeled [19].

Though late genetic-code-based radiation is still probable, results here concern an earlier RNA era, before protein AARS. Code division profoundly alters early code history. Division speeds SGC evolution itself (Figure 1A,B). The fastest evolution occurs for unselective division, when any code can divide (Figures 1A, 3 and 5). Under these conditions, the fastest approach to the SGC yet seen is observed (compare [10]). Moreover, code division reinforces the majority of SGC assignments: when a mixture of SGC and random assignments is supplied, division tends to SGC rather than random assignments (Figure 3). Such preservation increases if division is more likely (increased Pdiv, Figure 1B), as well as with more time to divide (cc = 1: Figure 1A).

Evolution in parallel. It was initially thought that code fusion would be advantageous because it allows parallel progress toward the SGC, gathering changes made in different coding compartments instead of waiting for all modifications in a single ancestral line [3]. This can be quantitated (Figures 4 and 8; [10]). Here, in part because of improved fusion, dividing SGC evolution is ≈3-fold accelerated over non-dividing codes fusing with independent genetic codes (Figures 4, 8 and 9).
Thirty-two possible pathways to the SGC: rates of evolution. With the addition of options for a code division threshold (cc/nocc) and division frequency (Pdiv ≥ 0) to previous models, there are 2⁵ = 32 types of pathways for code evolution. In this work, a code evolves entirely without one of these five effects or, in contrast, with a probability known to alter coding outcomes (see the Supplementary Data File).

By plotting evolutionary results against a structured list of pathways (Figures 1A, 3, 4, 6 and 11), defined at plot top, multiple different evolutionary pathways can be compared. This is first used for differing division rates and differing thresholds (Figures 1A and 3) and then extended to all 32 pathways (Figures 4 and 6), emphasizing the rate of approach to the complete set of SGC assignments (Figure 4), the adherence of the resulting codes to SGC encoding (Figures 3, 6 and 8), and the role of code division (Figure 11).

The rate of evolution shows a notable canyon of fast evolution (Figure 4) for eight mechanisms (#17-24) that allow code division (div) and impose no threshold for division (nocc). Conspicuously, all eight canyon mechanisms encode ≥20 functions more quickly than any of the other 24 possible pathways.

Thirty-two possible pathways to the SGC: accuracy of evolution. There is a general relation between the speed and accuracy of code evolution: this is shown in Figure 7, where the times for evolution to ≥20 functions are shown versus the accompanying misassignment for all 32 pathways. The observation that much of the variance in accuracy can be explained by the rate of evolution is welcome. Figure 7 implies that one can find quick evolution accompanied by accurate assignment, so starting from a mixture of initial encodings becomes plausible.
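The 2⁵ = 32 combinations of the five binary mechanism choices can be enumerated mechanically; a sketch follows. The ordering (and hence any numbering) produced here is arbitrary and need not match the paper's #1-32 labels.

```python
from itertools import product

OPTIONS = ["cc", "div", "fus", "tab", "wob"]  # five binary mechanism choices

def all_pathways():
    """Enumerate the 2**5 = 32 mechanism combinations as label tuples,
    e.g. ('nocc', 'div', 'fus', 'notab', 'nowob') for a pathway-#20-style
    mechanism."""
    return [tuple(name if on else "no" + name
                  for name, on in zip(OPTIONS, bits))
            for bits in product([True, False], repeat=5)]
```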
This promise is fulfilled in Figure 4 for rates and Figure 6 for accuracies. These profiles have similar shapes: time to ≥20 functions and misassignments track well for most of the 32 very different mechanisms (Figure 7). Most particularly, a #17-24 canyon with quick evolution and accurate assignments exists in Figures 4 and 6.

A rift in the canyon floor: the role of fus. Codes requiring the least selection [14] to become the SGC are likely precursors to the historical code. Thus, further resolution comes from a more precise measure of distance to the SGC, incorporating both speed and accuracy. In Figure 8, such an index is implemented for the eight canyon codes (Figures 4 and 7), using as distance metric the fraction of codes that encode ≥20 functions and are simultaneously accurate: mis0, mis1, or their sum.

Reading the upper legend of Figure 8, there is a large difference between codes that fuse (fus) and those that do not (nofus). Maximally complete codes via fusion (#17-20) are about an order of magnitude more abundant than via non-fusing pathways (#21-24). This parallels previous findings [10,12] that most complete codes come from code fusion. A nocc div fus canyon subset (Figure 8) of pathways implements the doubly capable code evolution implied by the speed-accuracy correlation (Figure 7).
Implications of a flat canyon floor. Differences between tab/notab and wob/nowob codes are dramatically curtailed within the nocc div canyon, where these variations have their smallest observed effects (Figures 4 and 7). Such small effects are of evolutionary importance in two ways.

The first relates to wobble: how can one rationalize the universal adoption of wobble coding when it is everywhere unfavorable (see Wobble is always inhibitory, above)? One response is that wobble is likely delayed [12], but another is that there exist pathways (#17-20; Figures 4, 7 and 9) where wobble has a minimal negative effect. Wobble introduced late in pathway #18 or 20 would not be selected against.

Comparing the best pathways. Small canyon-floor differences between tab/notab pathways are also evolutionarily significant. Routes #18 and 20 host codes that reach the SGC most quickly (Figure 4) while also preserving high accuracy (Figure 6). When speed and accuracy are required together (Figure 8), #18 and 20 are again best. What does this multiple superiority mean?
Figure 8 shows that #18 and #20 environments differ only in independent codes; it is somewhat better to avoid them. This is puzzling, because more independent codes provide a broader sample of the coding environment and are generally expected to find the SGC sooner [13]. Moreover, multiple codes can fuse, quickly forming more complete codes by summing compatible assignments [10,12]. Figure 8, therefore, suggests that something subtle makes path #20 (nocc div fus notab nowob) best, and in particular, superior to #18 (nocc div fus tab nowob).

Multiple codes are more advantageous if they resemble each other. The subtlety is in the nature of the "other" codes. When independent codes fuse, they assemble complete, accurate code tables significantly more rapidly [10]. However, code division creates a new kind of fusion partner. Figure 11 illustrates this, using a pathway containing both independent and division fusion partners (Figure 1A). As code division increases in Figure 11, the fraction of codes with successful fusion among environmental codes increases. Even more relevantly, unsuccessful fusions (annihilations via conflicting assignments) decrease, and by the same proportion as fusions increase. Figure 11's two plots mirror each other. Especially apt fusion partners from code division replace fusion to independently arising codes, making up the approximately constant number of assignments required for complete code construction (Figure 1A). At all levels of code division, the quickest (Figure 1A) and most accurate SGC-like evolution (Figure 3) is associated with the greatest successful (Figure 11, square) and least unsuccessful (Figure 11) code fusion.
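The distinction between successful fusion and annihilation can be sketched as a merge that fails on any conflicting assignment. This is an illustrative reduction of the idea, with dict-based tables standing in for the paper's coding tables.

```python
def fuse(code_a, code_b):
    """Attempt code fusion: return the merged table, or None for an
    annihilation, i.e. the two codes conflict on some triplet's function."""
    merged = dict(code_a)
    for triplet, fn in code_b.items():
        if merged.get(triplet, fn) != fn:
            return None  # conflicting assignment: unsuccessful fusion
        merged[triplet] = fn
    return merged
```

In this picture, daughters of a recent division share all ancestral assignments and so can rarely conflict, which is one way to rationalize why partners arising by division fuse successfully more often than independently originated codes.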
With time, dividing, highly related code numbers increase, so variants of a dividing code will be made and tested more rapidly. This resembles the 'crescendo' of competent codes created by fusion with increasing numbers of unrelated codes [10]. In this work, novel partner codes originate by division and subsequent evolution, but the result is similar: an era when highly complete, highly accurate codes proliferate. SGC selection can survey a second kind of prolonged fusion-division crescendo (Figure 11), during which many different but related SGC-like codes are exposed to selection.

Pathway #20 simplifies SGC evolution. Thus, the "disadvantage" of fusion between independent codes is only that a better path exists: a dividing code population harvests evolutionary change by fusing evolved ancestral codes and varied descendants. Most especially, this effect makes evolution from a unique origin (pathway #20) somewhat more efficient than fusing with independent codes (pathway #18). Simpler primordial code emergence, by the least SGC selection from a single ancestor, is plausible.

Fusion yields hybrid routes to the SGC. Figure 1's varying division rates move code evolution along an axis joining pathways #18 and #20. Code division increases and independent code fusion decreases (toward #20), or the reverse (toward #18), with a small change in result (Figure 8). Hybrid routes with similar SGC access suggest novel possibilities. Specifically, pathways #18 and 20 approach the SGC by fusing early coding tables from differing origins. Therefore, these pathways suggest that partial codes from other origins could be fused.
The SGC can have an even earlier history [20], but the early code usually becomes structured in one of four ways. 'Frozen accidents': Crick [21] supposed that a code could be frozen, perhaps after being shaped by earlier molecular interactions. In any case, a growing code would ultimately become difficult to change, because changes would perturb all previous gene products [22]. 'Coevolution': reference [23] emphasizes that it is undeniable that code progress could have been shaped by metabolic evolution, more complicated amino acids being encoded only after progressive biosynthesis reaches them. This is a highly developed theory [24-27], often called coevolution of the genetic code. 'Error minimization': a code or partial code might be shaped by selection to minimize the effects of coding errors or mutations [28,29]. Strikingly, error minimization can arise without selection against

Figure 1. (A) Effects of code division on time to evolve ≥20 encoded functions, and on the number of initial assignments required for ≥20 encoded functions. The pathway x-axis follows a structured list of code division variables (see text) named in the box at graph top: "Pdiv" = probability of unselected code division/passage; "encode div with" = code completeness (cc) required to encode accurate code division. Pmut = 0.00975, Pdec = 0.00975, Pinit = 0.15, Prand = 0.05, Pfus = 0.001, Ptab = 0.08, Pwob = 0.0; results show means for evolution in 500 environments. (B) Mean time to evolve ≥20 encoded functions versus mean number of code divisions for those codes. A square marks the shortest evolutionary time. Environments are those in (A).

Figure 2. Mean mis and mean fraction of codes with SGC-like assignments are closely, and inversely, related. mis0 = identical to SGC assignments; mis1 = one difference from SGC assignments. Environments are those in Figure 1A.

Life 2023, 13, x FOR PEER REVIEW

Figure 3. Effects of division variables (Pdiv and cc) on accuracy of evolution to ≥20 encoded functions. The x-axis is a structured list of pathways as in Figure 1A. Environments are those in Figure 1A. A square marks the most accurate pathway.

Figure 4. Logarithmic mean time to evolve ≥20 encoded functions for 32 potential code evolution pathways. Pathway mechanism abbreviations are listed at graph top: cc = require completeness criterion for code division, nocc = no cc required; div = allow code division with probability Pdiv, nodiv = no code division; fus = allow codes to fuse with probability Pfus, nofus = no code fusion; tab = allow independent environmental coding tables, origin probability Ptab, notab = no parallel tables; wob = allow wobble coding (no vertical line through point), nowob = simple base pairing (vertical line). A shaded bar below the x-axis marks the favored nocc div canyon. Numerical data are presented in a Supplementary Data File, RFW_supp_data_a.xlsx.

Figure 6. Mean misassignment (mis) at ≥20 encoded functions for 32 code pathways. Pathway mechanism abbreviations are listed at graph top. Environments are those in Figure 4. The shaded bar beneath the x-axis marks the favored nocc div canyon.

Figure 7. Mean misassignment at ≥20 encoded functions versus time for its evolution in passages, for 32 code evolution pathways. Environments are those in Figure 4.

Figure 8. Fraction of codes with both ≥20 encoded functions and mis0 or mis1, or ≥20 encoded functions with any mis. All environments have run for 121 passages, the mean time for pathway #20 to reach ≥20 encoded functions. The shaded bar beneath the x-axis marks the superior nocc div fus section of the nocc div canyon pathways. Conditions are those of Figure 1, except Ptab = 0.08 or 0.0, Pwob = 0.005 or 0.0. Fractions are mean proportions of 1000 environments.

Figure 9. Independence of code fusion and division. Fraction of observed codes using fusion times (indicated with *) fraction of observed codes using division, versus fraction of observed codes using division and fusion together. Environments are those in Figure 4.

Figure 11. Successful and unsuccessful fusions (annihilations) have complementary behavior. Only code division varies; pathways and conditions are those of Figure 1A, as indicated by abbreviations at plot top. Square points are the same as in Figures 1A,B and 3.
Figure 11.Successful and unsuccessful fusions (annihilations) have complementary behavior.Only code division varies; pathways and conditions are those of Figure 1A, as indicated by abbreviations at plot top.Square points are the same as in Figure 1A,B and Figure 3.Moreover, the common use of code fusion by #17-20, the four most probable of 32 SGC pathways, supports the necessity of merging primordial codes, initially proposed for other reasons [10,12].While differences among Figure 8 s complete and accurate codes are not large, pathway #20 (nocc div fus notab nowob) is again superior, implying the least selection to evolve the SGC.Implications of a flat canyon floor.Differences between tab/notab and wob/nowob codes are dramatically curtailed within the nocc div canyon, where these variations have their smallest observed effects (Figures4 and 7).Such small effects are of evolutionary importance in two ways.The first relates to wobble: how can one rationalize the universal adoption of wobble coding when it is everywhere unfavorable (see Wobble is always inhibitory above)?One response is that wobble is likely delayed[12], but another is that there exist pathways (#17-20, Figures4, 7 and 9) where wobble has a minimal negative effect.Wobble introduced late in pathway #18 or 20 would not be selected against.
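The captions above treat code evolution as a sequence of stochastic "passages", each applying the named per-passage probabilities (Pinit, Pmut, Pdiv, Pfus). A minimal sketch of such a loop, assuming illustrative values for Pdiv and a population cap, and not the authors' simulator:

```python
import random

# Illustrative sketch of a passage-based code-evolution loop, NOT the
# authors' simulator. Probability names follow Figure 1A; PDIV and
# MAXPOP are assumed values chosen only to keep the sketch bounded.
PINIT = 0.15      # per passage: a code gains one random new assignment
PMUT = 0.00975    # per passage: one existing assignment is misassigned
PDIV = 0.25       # per passage: unselected code division (assumed value)
PFUS = 0.001      # per passage: two codes fuse into one
MAXPOP = 50       # population cap (assumption)

def passage(codes, rng):
    """One passage: initiation, misassignment, division, then a possible fusion."""
    nxt = []
    for code in codes:
        code = dict(code)
        if rng.random() < PINIT:
            code[rng.randrange(20)] = "assigned"    # one of 20 functions
        if code and rng.random() < PMUT:
            code[rng.choice(sorted(code))] = "mis"  # misassignment
        nxt.append(code)
        if rng.random() < PDIV:
            nxt.append(dict(code))                  # division copies the code
    if len(nxt) >= 2 and rng.random() < PFUS:
        a, b = nxt.pop(), nxt.pop()
        nxt.append({**a, **b})                      # fusion merges assignments
    return nxt[:MAXPOP]

def passages_to_complete(seed=1, target=20):
    """Count passages until some code encodes >= target functions."""
    rng = random.Random(seed)
    codes, t = [{}], 0
    while not any(len(c) >= target for c in codes):
        codes = passage(codes, rng)
        t += 1
    return t
```

Division copies partially built codes, so later assignments need not be rediscovered from scratch, and fusion merges assignments from two codes, the role highlighted above for pathways #17-20.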
Clinical dilemma of acute abdomen in patients with systemic lupus erythematosus: A case report

© 2017 The Authors; Tabriz University of Medical Sciences. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Citation: Ghavidel A. Clinical dilemma of acute abdomen in patients with systemic lupus erythematosus: A case report. J Anal Res Clin Med 2017; 5 (2): 69-74. Doi: 10.15171/jarcm.2017.013

Gastrointestinal (GI) presentations may be seen in approximately one third of patients with rheumatologic disorders. Some clinicopathologic findings may be mostly nondiagnostic and usually show intestinal tract involvement by rheumatologic inflammation and adverse drug reactions of treatment agents. Bowel discomfort, associated with emesis and anorexia, is seen in up to one third of cases with SLE. The cause of abdominal pain does not differ significantly from that in patients without SLE. Special attention should be given to disorders that may accompany lupus, such as lupus peritonitis, infection, inflammatory bowel disease, pancreatitis, and mesenteric vasculitis with intestinal infarction. In immunocompromised cases, infestation with opportunistic microorganisms like cytomegalovirus may cause abdominal pain and GI catastrophes like upper GI bleeding.
1,2 A usually forgotten etiology of abdominal pain in SLE is lupus peritonitis. It is rarely reported, but autopsy studies have shown that 60 to 70 percent of patients with SLE have had an episode of peritoneal attack at some time in the disease history. If peritoneal involvement is associated with frank rebound tenderness on physical examination and computed tomography (CT) scan-documented intraperitoneal fluid, a fluid tap is warranted to rule out infection. In the literature, abdominal pain of undiagnosed pathology sometimes responds to glucocorticoids, proposing an inflammatory origin. Thus, if a high gradient is found on peritoneal fluid examination, a course of steroids (60 mg of prednisone per day) may be suggested if the clinical findings are moderate to severe. Presence of peritoneal fluid is unusual in SLE. When present, visceral perforation with and without peritonitis should be excluded by paracentesis. Other causes of ascites, like congestive heart failure and hypoalbuminemia related to lupus, must be excluded. These causes may be due to the nephrotic syndrome or protein-losing enteropathy in some patients.3,4 Dysphagia is the most prevalent GI symptom in SLE and is usually due to organ hypomotility. The other presentations include esophageal stricture, gastroesophageal reflux, esophageal candidiasis, esophageal ulcers, and medication side-effects. Radiographic and manometric evaluation of the esophagus may disclose abnormal motility, while endoscopy and radiography of the esophagus can show other curable pathologies of dysphagia. The treatment of esophageal symptoms depends on the predisposing cause.5 Mesenteric vasculitis is potentially life-threatening. The presentation may be insidious, with lower abdominal pain. Radiographs and abdominal CT scans may show nonspecific but suggestive findings, but arteriography is usually required for diagnosis. Treatment includes antibiotics, high-dose glucocorticoids, and intravenous cyclophosphamide.
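The "high gradient" referred to above is commonly quantified as the serum-ascites albumin gradient (SAAG). A minimal sketch of the arithmetic, for illustration only and not as clinical guidance; the 1.1 g/dL cutoff is the conventional threshold and is not stated in this report:

```python
def saag(serum_albumin_g_dl, ascites_albumin_g_dl):
    """Serum-ascites albumin gradient (SAAG), in g/dL."""
    return round(serum_albumin_g_dl - ascites_albumin_g_dl, 2)

def suggests_portal_hypertension(gradient_g_dl):
    # Conventional cutoff: SAAG >= 1.1 g/dL suggests portal hypertension;
    # a lower gradient favors exudative causes such as peritonitis,
    # which is the concern in lupus peritonitis.
    return gradient_g_dl >= 1.1
```

For example, a serum albumin of 3.5 g/dL with an ascitic fluid albumin of 1.9 g/dL gives a SAAG of 1.6 g/dL.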
6,7 Pancreatitis occurs in less than 10 percent of patients, usually in those with active SLE elsewhere. The presentation does not differ from patients without SLE and includes upper abdominal pain, nausea and vomiting, and an increase in serum amylase. The differential diagnosis of abdominal pain and an elevated amylase should include mesenteric infarction, perforated peptic ulcer, ruptured ectopic pregnancy, some tumors, and renal failure. Imaging studies (e.g., CT scan or ultrasound) often help to confirm the diagnosis. Some patients require glucocorticoids in addition to usual medical treatment.8,9 The potential causes of either liver enzyme abnormalities or overt liver disease include SLE itself ("lupus hepatitis"), nonsteroidal anti-inflammatory drugs (NSAIDs), and coincidental disease. Liver chemistry abnormalities may resolve with cessation of NSAIDs or treatment of active SLE. Liver abnormalities are common; jaundice is rare and may reflect hemolysis rather than liver disease. Some liver diseases in SLE may be severe and progressive. The term "lupoid hepatitis" refers to autoimmune hepatitis rather than liver involvement in SLE.10,11 Protein-losing enteropathy in patients with SLE has been noted in a number of small series and case reports. It is estimated that half of patients have diarrhea. It may represent the first manifestation of SLE. Patients typically respond well to glucocorticoids or immunosuppressants.
12 A young woman was visited at our center because of acute abdominal discomfort; she had been well until 4 weeks before her presentation. She developed generalized abdominal pain accompanied by nausea and vomiting. She was visited at the emergency department by a general surgeon. The musculoskeletal examination disclosed peripheral polyarthralgia, and based on the findings, acute abdomen was taken into account. She was managed with nasogastric aspiration, rehydration, antibiotics, and nutritional support. The differential diagnosis was discussed, and she was finally diagnosed with acute appendicitis. She was transferred to the operating room. After explorative laparotomy, emergent appendectomy was performed with a diagnosis of perforated appendicitis. But her abdominal pain with nausea and vomiting continued, and she was transferred to our center.

The previous laboratory findings of the patient are shown in Tables 1-4. Special consideration was directed to disorders that might have been associated with lupus, such as infection and lupus peritonitis. Plain chest X-ray showed bilateral basal pleural effusion. Imaging findings are described below (Figure 1). SLE can affect the entire GI tract.1 GI presentations may be seen in approximately one third of cases. Some clinicopathologic findings may be mostly nondiagnostic and usually show intestinal tract involvement by lupus and adverse drug reactions of treatment agents.

The most important problem is excluding surgical abdomen. The cause of abdominal pain does not differ significantly from that in patients without lupus.4 Special attention should be given to disorders that may accompany lupus, such as lupus peritonitis, infection, inflammatory bowel disease, dyspepsia, pancreatitis, and mesenteric vasculitis with intestinal infarction. In immunocompromised cases, infestation with opportunistic microorganisms like cytomegalovirus may cause abdominal pain and GI catastrophes like upper GI bleeding.
5 An often overlooked cause of abdominal pain in SLE is lupus peritonitis. It is rarely suspected, but autopsy studies show that 60 to 70 percent of patients with SLE have had an episode of peritoneal attack at some time in the disease history. If the peritoneal attack presents with frank rebound tenderness on physical examination and CT scan-documented intraperitoneal fluid, a fluid tap is warranted to rule out infection. In the literature, abdominal pain of undiagnosed pathology sometimes responds to glucocorticoids, showing an inflammatory cause.6 Thus, if a high gradient is found on peritoneal fluid examination, a course of steroids (60 mg of prednisone per day) may be suggested if the clinical findings are moderate to severe. Presence of peritoneal fluid is unusual in SLE. When present, visceral perforation with and without peritonitis should be excluded by paracentesis. Other causes of ascites, like congestive heart failure and hypoalbuminemia related to lupus, must be excluded. These causes may be due to the nephrotic syndrome or protein-losing enteropathy in these patients. Dyspepsia has been noted in 11 to 50 percent of patients with SLE, while peptic ulcers (usually gastric) are present in 4 to 21 percent. These complications are more common in patients treated with NSAIDs, but SLE itself may also predispose to ulcer formation. Glucocorticoids also increase the incidence of dyspepsia.8,9 Mesenteric vasculitis is potentially life-threatening.13 Radiographs and abdominal CT scans may show nonspecific but suggestive findings, but arteriography is usually required for diagnosis. Treatment includes antibiotics, high-dose glucocorticoids, and intravenous cyclophosphamide.10,11 Pancreatitis occurs in less than 10 percent of patients, usually in those with active SLE elsewhere. The presentation does not differ from patients without SLE and includes upper abdominal pain, nausea and vomiting, and an increase in serum amylase.
12,14 The differential diagnosis of abdominal pain and an elevated amylase should include mesenteric infarction, perforated peptic ulcer, ruptured ectopic pregnancy, some tumors, and renal failure. Imaging studies (e.g., CT scan or ultrasound) often help to confirm the diagnosis. Some patients require glucocorticoids in addition to usual medical treatment.13,15,16 GI symptoms and signs may be seen as the primary presentation in approximately one third of patients with rheumatologic disorders. Some of these findings may be nondiagnostic and pose a clinical diagnostic challenge. GI tract involvement by SLE must be differentiated from adverse drug reactions of treatment agents. Abdominal pain, associated with nausea and vomiting, is seen in up to 30 percent of patients with SLE. Special attention should be given to disorders that may accompany lupus, such as lupus peritonitis and infection. If the physician has a high clinical suspicion of this diagnosis, prompt treatment with corticosteroids is very important. This suspicion may prevent catastrophic results in patients with GI involvement of SLE and improve prognosis in this high-risk patient population.

The author would like to acknowledge those who helped prepare this article. The author prepared this article alone. None. None. The author has made his best effort not to reveal any information showing the identification of the patient and to keep it confidential.

Figure 1. Plain chest X-ray showed bilateral basal pleural effusion with passive lung collapse; abdominal CT scan showed intestinal wall edema and circumferential thickening of the small and large bowel walls, splenomegaly, and ascites.

Table 1.
Laboratory findings of the patient.

She was diagnosed to have SLE with skin and joint involvement and pancytopenia, and was treated with steroids and chloroquine; she was stable until 4 weeks before she presented to our center. Using the American Rheumatism Association (ARA) criteria for the diagnosis of SLE, it was suggested that the patient had classical SLE. After 2 years of regular follow-up, she unfortunately did not continue medical visits, until 4 weeks ago, when her present presentation appeared as discussed above. It is clear that the patient's major presentations were due to SLE peritonitis. Laboratory findings, including the ascites fluid analysis, are shown in Table 4.

RDW: Red blood cell distribution width; PDW: Platelet distribution width; MPV: Mean platelet volume; P-LCR: Platelet large cell ratio; M: Male; F: Female

Table 2. Laboratory findings of the patient
Table 3. Laboratory findings of the patient
Table 4. Laboratory findings of the patient
Molecular Cloning of Toll-like Receptor 2 and 4 (SpTLR2, 4) and Expression of TLR-Related Genes from Schizothorax prenanti after Poly (I:C) Stimulation

Toll-like receptor (TLR) signaling is conserved between fish and mammals, except for TLR4, which is absent in most fish. In the present study, we aimed to evaluate whether TLR4 is expressed in Schizothorax prenanti (SpTLR4). SpTLR2 and SpTLR4 were cloned and identified, and their tissue distribution was examined. The cDNAs containing the SpTLR4 and SpTLR2 complete coding sequences (CDS) were identified and cloned. Additionally, we examined the expression levels of seven SpTLRs (SpTLR2, 3, 4, 18, 22-1, 22-2, and 22-3), as well as SpMyD88 and SpIRF3, in the liver, head kidney, hindgut, and spleen of S. prenanti after intraperitoneal injection of polyinosinic-polycytidylic acid (poly (I:C)). SpTLR2 and SpTLR4 shared amino acid sequence identities of 42.15–96.21% and 36.21–93.58%, respectively, with sequences from other vertebrates. SpTLR2 and SpTLR4 were expressed in all S. prenanti tissues examined, particularly in immune-related tissues. Poly (I:C) significantly upregulated most of the genes evaluated in the four immune organs compared with the PBS control (p < 0.05); the expression of these genes was tissue-specific. Our findings demonstrate that TLR2 and TLR4 are expressed in S. prenanti and that poly (I:C) affects the expression of nine TLR-related genes, which are potentially involved in S. prenanti antiviral immunity or in mediating pathological processes, with differential kinetics. This will contribute to a better understanding of the roles of these TLR-related genes in antiviral immunity.

Introduction

The immune system of vertebrates includes the innate and adaptive immune systems, which are essential in fish immunity [1]. In fish, innate pattern recognition receptors (PRRs) activate the innate immune response through a series of highly conserved pathogen-associated molecular patterns [2].
The PRRs in fish include the RIG-I-like receptors, NOD-like receptors, C-type lectin receptors, and the toll-like receptor (TLR) family [3]. Among the PRR families, the TLR family is the most widely studied [4]. These receptors were first identified in Drosophila melanogaster in 1985 [5]. The TLR family is divided into six subfamilies based on evolutionary relatedness: the TLR1, 3, 4, 5, 7, and 11 subfamilies. To date, at least 22 TLRs have been cloned and identified in bony fish (TLR1, 2, 3, 4, 5M, 5S, 7, 8, 9, 13, 14, and 18-28), some of which are bony fish-specific TLRs, such as TLR18-28 [6,7]. In this study, another two TLRs, SpTLR2 (belonging to the TLR1 subfamily) and SpTLR4 (belonging to the TLR4 subfamily), were cloned and identified. Gene cloning and functional identification of fish-specific TLRs have also become research hotspots. Studying fish TLRs is necessary for understanding the immune system of lower vertebrates. TLRs play crucial roles in the identification of microbial pathogens that infect fish. Together with myeloid differentiation factor 88 (MyD88), interferon regulatory factors (IRFs), and other factors in the immune signaling pathway, TLRs are involved in the identification of most pathogenic microorganisms, including bacteria, viruses, and parasites [8][9][10][11]. Adapter molecules are recruited by the toll/IL-1 receptor (TIR) domain of TLRs during TLR signal transduction, leading to the activation of diverse signaling pathways. These signaling pathways involving TLRs can be divided into two categories: MyD88-dependent and MyD88-independent pathways. In the former, MyD88 acts as an adaptor protein and is recruited by TLRs as the first signaling protein, playing a key role in TLR signal transduction [12][13][14].
The MyD88-independent pathway is a specific signaling pathway involving only a few TLRs, mainly related to antiviral signaling; it is also known as the TIR-domain-containing adaptor-inducing interferon (IFN)-β-dependent pathway [7,10]. For example, TLR3 and TLR22 activate IRF3 and IRF7 to complete the immune response via this TIR-domain-containing adaptor-inducing IFN-β-dependent pathway [2,15]. IRF family members (IRF1-11) have immunoregulatory functions; IRF3 plays a crucial role in the innate immune resistance system against viruses [16] and in the MyD88-independent pathway [17]. Prenant's schizothoracin (Schizothorax prenanti) belongs to the fish family Cyprinidae and is known locally as "yang-fish" in Hanzhong city (Shaanxi, China) or, together with S. davidi, as "ya-fish" in Ya'an city (Sichuan, China). A rare, high-quality cold-water fish in its production areas, this economically important species has been artificially cultivated and is marketed for consumption at approximately 120 yuan/kg [31]. Because of intensive feeding, yang-fish are susceptible to bacterial infections, such as Aeromonas hydrophila [32,33] or Streptococcus agalactiae [34], as well as reoviruses [35], which hinder the healthy development of yang-fish farming. In this study, we aimed to evaluate the expression of TLR2 and TLR4 in S. prenanti under physiological conditions and after induction of antiviral response mechanisms with a poly (I:C) challenge. Poly (I:C) is a viral analog that has been shown to trigger innate and adaptive immune responses involving TLR signaling, depending on the species [36][37][38]. We first confirmed the existence of TLR4 in yang-fish and its secondary structure composition; furthermore, we predicted the 3D-structural models of the SpTLR4 and SpTLR2 proteins. The expression patterns of TLR-related genes in different immune organs (liver, head kidney, hindgut, and spleen) in response to poly (I:C) were analyzed by quantitative real-time PCR (qRT-PCR).
The genes we analyzed included SpTLR2, 3, 4, 18, 22-1, 22-2, 22-3, SpIRF3, and SpMyD88. Our findings contribute to further clarifying the roles of SpTLRs, SpIRF3, and SpMyD88 in the immune mechanisms of fish and to a better understanding of the function of SpTLR4.

Animal Treatments

Healthy S. prenanti (121.7 ± 28 g) were purchased from the Qunfu Yang-fish professional breeding cooperative (Hanzhong, China). The experimental fish were kept in glass tanks measuring 60 × 30 × 40 cm with aerated tap water at a temperature of 20 ± 1 °C. Twelve fish were placed per tank, and the feeding conditions followed our previous research [31]. After 10 days of acclimatization, S. prenanti in the test group were stimulated with an intraperitoneal injection of poly (I:C) (P1530, Sigma-Aldrich, St. Louis, MO, USA) at a dose of 5 mg/kg body weight. Fish in the control group were injected with the same dose of phosphate-buffered saline (PBS). To evaluate the expression of SpTLR2, 3, 4, 18, 22-1, 22-2, 22-3, SpIRF3, and SpMyD88 under poly (I:C) stimulation, anatomical samples of the poly (I:C)-injected animals were obtained at 12 and 24 h after challenge (4 animals per time point). Four PBS-injected fish were used as controls, and their tissues were collected at 24 h. The fish were anesthetized with 80 mg/L eugenol (Daoyuan Biotechnology Co., Ltd., Guangzhou, China) for 3 min before dissection. The heart, head kidney, liver, hindgut, intraperitoneal fat, muscle, and spleen were sampled and preserved in liquid nitrogen.

RNA Extraction and cDNA Synthesis

Tissue total RNA was extracted using TRIzol reagent (Invitrogen, Waltham, MA, USA). The integrity of the total RNA was checked with agarose gel electrophoresis, and the concentration and purity (A260/280 ratio) were determined using a Nanodrop One spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA).
The cDNA was synthesized from total RNA with the RevertAid First Strand cDNA Synthesis Kit (Thermo Fisher Scientific), following the manufacturer's instructions.

CDS Cloning of SpTLR2 and SpTLR4

Based on the transcriptome sequencing of S. prenanti and the sequences from Cyprinidae fish, specific TLR2 and TLR4 primers were designed (Table 1) with the help of PrimerQuest. The primers were synthesized by Tsingke Biotechnology Co., Ltd. (Xi'an, China). The SpTLR2 and SpTLR4 genes were amplified using PrimerStar Max DNA polymerase (Takara Bio, Shiga, Japan) with spleen cDNA as the template. The PCR was carried out as follows: 35 cycles of denaturing at 98 °C for 10 s, annealing at 50 °C for 15 s, and extension at 72 °C for 40 s. An A-tail was added to the 3' end of the PCR product with the DNA A-Tailing Kit (Takara Bio). The products were ligated into the pMD19-T vector (Takara Bio) and then transformed into competent Escherichia coli DH5α cells (TIANGEN, Beijing, China). The transformed cells were cultured on an LB agar plate (containing 100 mg/L ampicillin) at 37 °C. Subsequently, positive bacterial clones were sequenced to confirm the cloning.

[39], and the phylogenetic trees of TLR2 and TLR4 from different vertebrates were constructed using the neighbor-joining method with the MEGA 11.0 software [40].

Tissue Distribution of SpTLR2 and SpTLR4 mRNA

Total RNA was isolated from the different tissues (heart, liver, spleen, intraperitoneal fat, head kidney, muscle, and hindgut), and cDNA was prepared as described. qRT-PCR was performed using FastStart Essential DNA Green Master (Roche, Basel, Switzerland) on an Applied Biosystems StepOnePlus instrument (Life Technologies, Carlsbad, CA, USA). The primers used in this study are listed in Table 1. S. prenanti-specific actin primers were used as an internal control. Triplicate analyses of SpTLR2, SpTLR4, and actin mRNA expression were performed for all samples, and the data were analyzed according to the 2^−ΔΔCT method [41].
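The 2^−ΔΔCT calculation [41] can be sketched as follows; this is a generic illustration of the Livak method with made-up Ct values, not the authors' analysis script:

```python
def fold_change_2_ddct(ct_target_sample, ct_actin_sample,
                       ct_target_calibrator, ct_actin_calibrator):
    """Relative expression by the 2^-ΔΔCt method, normalized to β-actin."""
    d_ct_sample = ct_target_sample - ct_actin_sample              # ΔCt, treated sample
    d_ct_calibrator = ct_target_calibrator - ct_actin_calibrator  # ΔCt, calibrator (PBS control)
    dd_ct = d_ct_sample - d_ct_calibrator                         # ΔΔCt
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: the target amplifies 2 cycles earlier after
# poly (I:C) relative to the PBS control -> 4-fold upregulation.
fold = fold_change_2_ddct(24.0, 18.0, 26.0, 18.0)  # -> 4.0
```

A fold change above 1 indicates upregulation relative to the calibrator, below 1 downregulation, assuming near-100% amplification efficiency for both target and reference.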
Tissues collected from the poly (I:C)-challenged animals were processed and analyzed using the same methods, to assess the changes in the expression levels of TLR-related genes induced by the viral analog.

Statistical Analysis

SPSS 22.0 (IBM Corp., Armonk, NY, USA) and GraphPad Prism 5 (GraphPad Software Inc., San Diego, CA, USA) were used for data analysis and visualization, respectively. The mRNA expression abundance was analyzed using a one-way analysis of variance. All data are presented as the mean ± standard error (n = 4); statistical significance was established as p < 0.05.

Identification and Structural and Phylogenetic Analysis of SpTLR2 and SpTLR4

First, we set out to confirm the expression of TLR2 and TLR4 in S. prenanti. We observed the expression of both genes and investigated their features. We found that the CDS length of SpTLR2 was 2379 bp (GenBank accession no. OQ676992), and that the predicted SpTLR2 ORF encoded a protein of 792 amino acids. The calculated molecular mass and theoretical isoelectric point of SpTLR2 were 199.19 kDa and 4.89, respectively. Domain architecture analysis of SpTLR2, using the SMART tool (http://smart.embl-heidelberg.de/, accessed on 28 February 2023), revealed the presence of canonical structural motifs of TLR family proteins. These include a signal peptide, six LRRs, and one TIR domain; this SpTLR2 domain structure is similar to that of other vertebrate TLR2s (Figure 1). For SpTLR4, we found that the CDS length was 2343 bp (GenBank accession no. OQ108869). The predicted SpTLR4 ORF encoded a 780 amino acid protein, and its calculated molecular mass and theoretical isoelectric point were 197.68 kDa and 4.93, respectively. Using the SMART tool, we identified the following structural domains in SpTLR4: six LRRs, one transmembrane domain, and one TIR domain (Figure 2). The TLR4 domain regions in other vertebrates are shown in Figure 2.
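The one-way analysis of variance used above can be sketched with a stdlib-only F statistic; this is an illustration of the test itself, not the authors' SPSS workflow (a p-value would additionally require the F distribution, e.g., scipy.stats.f.sf):

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of groups of measurements."""
    k = len(groups)                              # number of groups
    n = sum(len(g) for g in groups)              # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: group sizes times squared mean deviations.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: squared deviations from each group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)
```

With the n = 4 fish per group reported here, df_between = k − 1 and df_within = n_total − k, matching the standard one-way ANOVA table.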
Figure 3 shows the secondary structure and predicted 3D structure of SpTLR2 (Figure 3a,b, respectively) and SpTLR4 (Figure 3c,d, respectively). Similar to SpTLR2, SpTLR4 has a horseshoe-shaped solenoid structure with parallel β-sheet lining the inner circumference and α-helices flanking its outer circumference. . Tissue Distribution of SpTLR2 and SpTLR4 Expression We quantified SpTLR2 and SpTLR4 mRNA expression in the eight tissues (heart, head kidney, spleen, liver, muscle, gill, hindgut, and intraperitoneal fat) of 4 fish using qRT-PCR to determine the transcript abundance of both TLRs. The liver, head kidney, spleen, and hindgut of fish are generally regarded as immune organs that mediate the immune response [42]. The mRNA abundance of β-actin was used for normalization. The expres- To infer the evolutionary relationships between SpTLR2 and SpTLR4, a phylogenetic tree was constructed based on the alignment of SpTLR2 and SpTLR4 amino acid sequences with other available vertebrate amino acid sequences for these two proteins. The SpTLR2 amino acid sequence was most similar to that of fish and was closest to the golden mahseer (Tor putitora) TLR2, with 96.21% identity. We analyzed the phylogeny of the SpTLR2 and SpTLR4 amino acid sequences to determine the relationships between S. prenanti and other vertebrates based on sequences in the GenBank database ( Figure 4). The results revealed a high TLR2 and TLR4 amino acid sequence identity between S. prenanti and the fish of the cyprinid family, to which both S. prenanti and T. putitora belong to. Similar results were obtained for SpTLR4. 
Tissue Distribution of SpTLR2 and SpTLR4 Expression We quantified SpTLR2 and SpTLR4 mRNA expression in the eight tissues (heart, head kidney, spleen, liver, muscle, gill, hindgut, and intraperitoneal fat) of 4 fish using qRT- Tissue Distribution of SpTLR2 and SpTLR4 Expression We quantified SpTLR2 and SpTLR4 mRNA expression in the eight tissues (heart, head kidney, spleen, liver, muscle, gill, hindgut, and intraperitoneal fat) of 4 fish using qRT-PCR to determine the transcript abundance of both TLRs. The liver, head kidney, spleen, and hindgut of fish are generally regarded as immune organs that mediate the immune response [42]. The mRNA abundance of β-actin was used for normalization. The expression of splenic SpTLR2 was the highest, followed by the heart and intraperitoneal fat. Conversely, SpTLR2 levels in the head kidney, hindgut, muscle, and liver were significantly lower (p < 0.05), except in the gills where no significant differences were found. In contrast, SpTLR4 was found to be most abundant in the liver, in which its expression was significantly higher than that for the other seven tissues (p < 0.05); the spleen had the second-highest tissue expression of SpTLR4, which was much higher than those in the other six tissues (p < 0.05). Moreover, the SpTLR4 level in the heart was more pronounced and higher than those in the intraperitoneal fat, head kidney, and muscle tissues (p < 0.05), ( Figure 5). Genes 2023, 14, x FOR PEER REVIEW 9 of 18 sion of splenic SpTLR2 was the highest, followed by the heart and intraperitoneal fat. Conversely, SpTLR2 levels in the head kidney, hindgut, muscle, and liver were significantly lower (p < 0.05), except in the gills where no significant differences were found. 
[Figure 5 caption (fragment): relative mRNA levels in S. prenanti spleen, heart, intraperitoneal fat, gill, head kidney, hindgut, muscle, and liver tissues, as determined using qRT-PCR. The loading control was β-actin. Means with different letters (a, b, c, d) are significantly different from each other (p < 0.05).]

Expression of TLR-Related Genes Following Poly (I:C) Challenge

To determine the changes in TLR2, 3, 4, and 18, the TLR22s (22-1, 22-2, and 22-3), MyD88, and IRF3 in S. prenanti tissues at 12 and 24 h after poly (I:C) stimulation, the mRNA levels of these genes were quantified using qRT-PCR in the liver, head kidney, spleen, and hindgut. The results are shown in Figure 6.

Expression of SpTLR2

SpTLR2 transcripts in the head kidney increased significantly at the 12 h time point (p < 0.001). Conversely, at the 24 h time point, the level of SpTLR2 was significantly decreased relative to both the control and the poly (I:C) 12 h condition (p < 0.05 and p < 0.001, respectively). Similarly, the expression level of hepatic SpTLR2 at 24 h was significantly lower than that in the control group (p < 0.05).
In the hindgut, SpTLR2 mRNA was markedly upregulated at 24 h after poly (I:C) stimulation (p < 0.001). Conversely, the spleen was the only organ in which SpTLR2 levels were downregulated at both 12 and 24 h post-poly (I:C) challenge (p < 0.001) (Figure 6a).

[Figure 6 caption (fragment): …SpTLR22-3 (f), SpTLR18 (g), SpMyD88 (h), and SpIRF3 (i) in S. prenanti liver, head kidney, hindgut, and spleen at 12 and 24 h after poly (I:C) injection. Values were normalized using β-actin. Statistically significant differences between the groups are marked with asterisks (* p < 0.05, ** p < 0.01, *** p < 0.001).]
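Asterisk-style significance calls like those in Figure 6 (treated vs. PBS control at a given time point) are typically made with a two-group comparison. The paper does not name its test, so the sketch below uses scipy's independent-samples t-test with invented expression values purely for illustration:

```python
from scipy.stats import ttest_ind

# hypothetical normalized expression values (n = 6 fish per group)
pbs_control = [1.0, 0.9, 1.1, 1.0, 0.95, 1.05]
poly_ic_12h = [2.4, 2.1, 2.6, 2.3, 2.5, 2.2]

# two-sample t-test: is the 12 h poly (I:C) group different from control?
t_stat, p_value = ttest_ind(poly_ic_12h, pbs_control)

# map the p-value to the asterisk convention used in the figure
stars = "***" if p_value < 0.001 else "**" if p_value < 0.01 \
        else "*" if p_value < 0.05 else "ns"
print(f"t = {t_stat:.2f}, p = {p_value:.2g} ({stars})")
```

With this many pairwise comparisons across genes, tissues, and time points, a real analysis would also need a multiple-comparison correction or an ANOVA-based procedure.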
Expression of SpTLR3

In the liver, SpTLR3 levels were significantly higher at both time points than in the PBS-injected control (p < 0.01 at 12 h; p < 0.001 at 24 h), but no significant difference was found between the two poly (I:C) challenge groups. Similar to the effects observed for SpTLR2 in the hindgut, SpTLR3 expression in this organ was significantly upregulated at 12 h relative to the control (p < 0.01) and downregulated at 24 h relative to both the PBS-injected group (p < 0.001) and the 12 h poly (I:C) group (p < 0.001). In contrast, SpTLR3 mRNA levels did not differ significantly in the head kidney or spleen following poly (I:C) injection (Figure 6b).

Expression of SpTLR4

The relative transcript level of SpTLR4 was generally higher than that of most other genes examined. The highest expression of SpTLR4 was detected in the spleen at 24 h after poly (I:C) injection (approximately 113-fold the transcript level in the PBS-injected group; p < 0.001). In addition, hepatic SpTLR4 was significantly upregulated at 12 h (p < 0.01) and 24 h (p < 0.001) post-poly (I:C) stimulation. In both the head kidney and hindgut, SpTLR4 mRNA was significantly upregulated at 12 h (p < 0.01) and 24 h (p < 0.05) compared with the PBS control; relative to the 12 h time point, the level of SpTLR4 at 24 h was slightly decreased (p < 0.05) in the head kidney (Figure 6c).

Expression of SpTLR22s

Compared with the PBS control, the relative expression of SpTLR22-1 mRNA at 12 h post-poly (I:C) injection was unchanged in the liver, head kidney, and spleen; at this time point, the only organ showing a significant increase in this transcript was the hindgut (p < 0.05). In contrast, at the 24 h time point, all four organs displayed significant changes in SpTLR22-1 mRNA levels (upregulated: liver (p < 0.05), hindgut (p < 0.001), and spleen (p < 0.05); downregulated: head kidney (p < 0.01)) (Figure 6d).
For SpTLR22-2, the temporal expression pattern in the head kidney and hindgut was similar to that of SpTLR22-1 (downregulated at 24 h post-injection and upregulated at 12 and 24 h, respectively). Conversely, in the spleen, the SpTLR22-2 transcript was significantly downregulated at both time points (p < 0.001), whereas hepatic SpTLR22-2 was downregulated only at 24 h (p < 0.05) after the poly (I:C) injection (Figure 6e). The expression of SpTLR22-3 followed a different pattern from that of SpTLR22-1 and SpTLR22-2: it was unchanged at both time points in the liver and hindgut. In the head kidney, however, SpTLR22-3 mRNA was significantly increased at both 12 h (p < 0.01) and 24 h (p < 0.001) after the poly (I:C) injection. Moreover, splenic SpTLR22-3 was significantly increased, but only at the 24 h time point (p < 0.001) (Figure 6f).

Expression of SpTLR18

Hepatic SpTLR18 was significantly upregulated at both 12 and 24 h relative to the PBS-injected group (p < 0.001), and its expression at 24 h was also significantly higher than at 12 h (p < 0.01). At 12 h post-poly (I:C) challenge, SpTLR18 expression remained unchanged in the head kidney, hindgut, and spleen. At 24 h, however, SpTLR18 was significantly downregulated in the head kidney (compared with PBS, p < 0.05; with 12 h, p < 0.01), upregulated in the hindgut (p < 0.01), and unchanged in the spleen (Figure 6g).

Expression of SpMyD88

The relative levels of SpMyD88 in the liver and hindgut were unchanged; only the head kidney and spleen displayed altered temporal expression patterns of this transcript. Compared with the PBS group, SpMyD88 in the head kidney was significantly upregulated at 24 h after poly (I:C) stimulation (p < 0.05).
SpMyD88 mRNA in the spleen was significantly upregulated at the 12 h time point (p < 0.05) and then significantly downregulated at 24 h compared with both the 12 h stimulation group (p < 0.001) and the PBS-injected group (p < 0.01) (Figure 6h).

Expression of SpIRF3

SpIRF3 transcripts were upregulated in all four tissues at both time points after the poly (I:C) treatment. Hepatic SpIRF3 expression at 12 and 24 h was markedly higher than in the PBS control (p < 0.05 and p < 0.01, respectively). In the head kidney, SpIRF3 mRNA was significantly upregulated at both time points, but was lower at 24 h than at 12 h post-poly (I:C) (p < 0.001). The temporal expression patterns of SpIRF3 in the hindgut and spleen were similar; both tissues displayed increased levels of this transcript at 12 h, which remained stable at 24 h after the poly (I:C) injection (hindgut, p < 0.01; spleen, p < 0.001). Additional time points would be required to establish whether the expression of these transcripts peaked in these organs (Figure 6i).

Discussion

Our study is the first to identify TLR2 and TLR4 in S. prenanti. The predicted SpTLR2 and SpTLR4 amino acid sequences described here include the typical conserved structures of the TLR protein family. Previous studies have confirmed that the LRR domains of TLR proteins are involved in the recognition of pathogen components and that the number of LRR domains varies among animals [43,44]. SpTLR3, SpTLR5, SpTLR22, and SpTLR25 contain a signal peptide [18,45-47], as does SpTLR2. However, we found no signal peptide in SpTLR4, which is consistent with TLR4 proteins from other species: TLR4.1 and TLR4.2 from Ctenopharyngodon idella [24], and TLR4a from D. rerio [29]. The absence of a signal peptide suggests that these proteins may act in the cytoplasm.
TLR4 is not expressed in most fish species, probably because of the diversity of the environments in which they live and their evolutionary history [28]. In contrast to the mammalian protein, D. rerio TLR4 cannot recognize lipopolysaccharide (LPS) [29], indicating that some fish species have different subtypes of the TLR4 protein, some of which may function only as cytoplasmic pattern recognition receptors. The main function of IRFs is to interfere with viral replication by inducing the production of IFN [48,49]. Some IRFs, such as IRF3 and IRF9, activate IFN-α/β and their downstream pathways in the host's antiviral immune response [50,51]. In fish, IRF3 was first detected in rainbow trout (Oncorhynchus mykiss), where its expression was induced by treatment with poly (I:C) [52]. In Atlantic salmon (Salmo salar), MyD88 interacts with IRF3 and IRF7 to regulate the IRF-induced IFN response [53]. Moreover, IRF3 overexpression strongly induces the transcriptional activity of IFN, and the transcription of type I IFN is regulated by IRF3 after challenge by a double-stranded virus [54]. In this study, we found that SpIRF3 was upregulated in all four tissues examined, especially in the head kidney and liver. These results support the antiviral role exerted by fish IRF3. MyD88 plays a key role in the transduction of TLR-mediated signaling and is frequently evaluated in studies of TLR signaling pathways. After yellow drum (Nibea albiflora) [55] and Japanese flounder (Paralichthys olivaceus) were treated with Pseudomonas plecoglossicida and Edwardsiella tarda, respectively, NaMyD88 and PcMyD88 were markedly elevated in the kidney and spleen compared with the corresponding control groups [56]. To date, few studies have evaluated the effect of viruses on MyD88 expression in fish.
Of the few available reports, most have focused on changes in MyD88 expression after stimulation with the viral nucleic acid analog poly (I:C). MyD88 levels in the blood cells of Litopenaeus vannamei are lower than in control conditions, except at 4 and 12 h after poly (I:C) stimulation. Conversely, white spot syndrome virus (a dsDNA virus) significantly increases the expression of LvMyD88 [57], and SAV3 (an ssRNA virus) upregulates MyD88 in the S. salar spleen, where it remains elevated for 28 days [58]. Similar results are presented in this study, although the timing of the immune response varies depending on the pathogen and the fish species. To date, 22 TLRs have been identified in bony fish, belonging to six TLR subfamilies: TLR1 (TLR1, 2, 14, 18 (fish-specific), 24, 25, 27, and 28), TLR3 (TLR3), TLR4 (TLR4), TLR5 (TLR5M and 5S), TLR7 (TLR7, 8, and 9), and TLR11 (TLR13, 19, 20, 21, 22, 23, and 26) [7,28,59]. TLR2 forms homodimers or heterodimers with TLR1 and TLR6, recognizes various bacterial ligands, and participates in viral recognition. TLR2 from Epinephelus coioides participates in the immune response to LPS and poly (I:C) [60]. In the early stage of viral hemorrhagic septicemia virus (VHSV) infection in olive flounder (P. olivaceus), TLR2 and IRF3 are significantly upregulated. Accordingly, we observed that SpTLR2 and SpTLR18 were upregulated following the poly (I:C) challenge, particularly in the head kidney at 12 h and the hindgut at 24 h. Our previous study demonstrated that LPS significantly increases SpTLR18 levels [46], and the present results support the likely role of this protein in the innate immune responses of bony fish. Moreover, in the spleen, SpTLR2 expression was significantly downregulated compared with the PBS control at 12 and 24 h, and the SpTLR18 level was unchanged at these two time points.
Based on these observations, we speculate that the upregulation of SpTLR2 and SpTLR18 after poly (I:C) induction may occur at intermediate or later time points; validation of this hypothesis requires further investigation. TLR3 is the single member of the TLR3 subfamily. In mammals, TLR3 mediates the antiviral immune response to dsRNA viruses, similar to its function in fish. Studies have shown that fish TLR3 genes are significantly upregulated in immune-related tissues and organs after viral infection or poly (I:C) treatment, including in D. rerio infected with VHSV [61], renal leukocytes from rainbow trout [62] and large yellow croaker (Pseudosciaena crocea) [63], and G. rarus infected with grass carp reovirus (GCRV) [64]. In this study, SpTLR3 transcripts in the liver and hindgut were significantly upregulated 12 h after poly (I:C) induction. These results suggest that fish TLR3 recognizes viruses and plays an important role in the immune response. TLR22 is another fish-specific TLR, belonging to the TLR11 subfamily, and was first discovered in goldfish in 2003 [65]. Subsequently, TLR22 has been cloned and identified in 17 fish species, including D. rerio [20], P. olivaceus [66], S. salar [67], Fugu rubripes [68], Pseudosciaena crocea [69], C. idella [70], Epinephelus coioides [71], Gadus morhua [72], I. punctatus [26], L. rohita [73], C. mrigala [74], S. aurata [75], Scophthalmus maximus [76], Seriola lalandi [77], C. carpio L. [78], and S. prenanti [18]. Initially, two subtypes of TLR22 (named TLR22-1 and TLR22-2) were discovered in rainbow trout; these have highly similar functions and were called 'twin' TLRs. Subsequently, TLR22-1, -2, and -3 were identified in S. salar (GenBank accession nos. AM233509, FM206383, and BT045774, respectively).
These reports, together with the results of the present study, suggest that fish TLR22 is a multifunctional immune receptor involved in the defense and immune response against almost all pathogenic microorganisms; however, the corresponding recognition mechanisms and downstream signaling pathways remain unclear. To date, only two studies have explored the downstream signaling pathways mediated by TLR22. In the first, TLR22 of T. rubripes was shown to be located in the cell membrane and to induce IFN expression in response to viral infection [68]. In contrast, another report showed EcTLR22 to be located in the endosome and to mediate protective mechanisms, inhibiting the transmission of antiviral and inflammatory signals to prevent excessive inflammation [79]. These results suggest that TLR22 may have different functions in different fish species; therefore, additional studies are needed to shed light on the signaling mechanisms mediated by TLR22. A previous study reported that SpTLR22-1 mRNA levels in the head kidney and spleen were upregulated 12 h after a poly (I:C) challenge, and SpTLR22-3 was significantly increased at both 12 and 24 h in the head kidney; conversely, SpTLR22-2 did not change at either time point [18]. The results described here show some differences from that previous report. We found that the transcript level of SpTLR22-3 was much higher than that of TLR22-1 and -2, especially in the head kidney and spleen at 24 h after injection. Our findings suggest that the 'triplet' SpTLR22s (TLR22-1, -2, and -3) jointly mediate the recognition of poly (I:C) and are involved in the immune response. The biggest difference in TLR signaling pathways between mammals and fish pertains to the TLR4-mediated pathway [80]. TLR4 is a direct receptor of bacterial LPS [2]. Unlike in mammals, TLR4 is absent in most fish and is found mainly in cyprinids.
This discrepancy raises the question of whether mammalian and fish TLR4 have similar functions in viral recognition. In C. idella infected with GCRV, TLR4 expression is increased in the muscle and liver [24], and similar results have been observed in G. rarus infected with GCRV [22]. Our present findings support these previous reports: the relative transcript level of SpTLR4 was generally higher than those of the other genes examined, and its expression was highest in the spleen at 24 h after poly (I:C) stimulation relative to the control group. These results suggest that fish TLR4 expression is induced in response to viral challenge and may play a crucial role in the immune response beyond antimicrobial immunity; however, its ligand specificity and function require further study. Overall, our study found evidence of the following: TLR4 is present in S. prenanti; SpTLR4 is involved in antiviral immunity; and the spleen is the most sensitive immune organ for SpTLR4 detection at 24 h after the poly (I:C) injection. Of the nine genes examined, the upregulation of SpTLR4 was the greatest, with its level in the spleen at 24 h increased approximately 110-fold. In addition, SpTLR3 and SpTLR18 in the spleen, and SpTLR22-3 and SpMyD88 in both the liver and hindgut, were not inducible by poly (I:C), whereas the other genes in the four immune tissues were inducible.

Conclusions

In this study, the CDS of SpTLR2 and SpTLR4 were successfully cloned and characterized. Phylogenetic analysis showed that the SpTLR2 and SpTLR4 proteins are most closely related to TLR2 and TLR4 from golden mahseer. Multiple sequence alignment showed that SpTLR2 and SpTLR4 are moderately conserved. Both proteins were expressed in all tissues examined; SpTLR2 was most abundantly expressed in the spleen and SpTLR4 in the liver.
The poly (I:C) challenge affected the expression of several TLR-related genes in an organ-specific manner, suggesting their involvement in antiviral immunity or pathological processes. Overall, our findings demonstrate that SpTLR2 and SpTLR4 are likely involved in the immune response. These findings contribute to a better understanding of the mechanisms of immunity in lower vertebrates, which may shed light on response mechanisms to infection in economically relevant fish species.

Informed Consent Statement: Not applicable.

Data Availability Statement: Publicly available datasets were analyzed in this study. The rest of the data presented in this study are available on request from the corresponding author.
Simplification Is Not Dominant in the Evolution of Chinese Characters

Linguistic systems are hypothesised to be shaped by pressures towards communicative efficiency that drive processes of simplification. A longstanding illustration of this idea is the claim that Chinese characters have progressively simplified over time. Here we test this claim by analyzing a dataset with more than half a million images of Chinese characters spanning more than 3,000 years of recorded history. We find no consistent evidence of simplification through time, and contrary to popular belief we find that modern Chinese characters are higher in visual complexity than their earliest known counterparts. One plausible explanation for our findings is that simplicity trades off with distinctiveness, and that characters have become less simple because of pressures towards distinctiveness. Our findings are therefore compatible with functional accounts of language but highlight the diverse and sometimes counterintuitive ways in which linguistic systems are shaped by pressures for communicative efficiency.

INTRODUCTION

A common expectation about the world's writing systems is that their symbols evolve to become simpler over time. This idea is compatible with a broader literature on signed, spoken and written language that emphasizes ways in which linguistic systems are shaped by the need to support efficient communication (Gibson et al., 2019; Keller, 2005; Kirby et al., 2015; Zipf, 1949). Just as speakers simplify and shorten words in order to communicate with greater efficiency (Kanwal et al., 2017), written symbols undergo comparable transformations that remove superfluous graphical details and reduce visual complexity (Changizi & Shimojo, 2005; Dehaene, 2009; Garrod et al., 2007; Kelly et al., 2021; Pauthier, 1838; Trigger, 2003). As the world's only primary script still in continuous use, Chinese writing is regularly invoked as a compelling illustration of graphic simplification over historical time.
Classical and modern Chinese philologists have long commented on processes of change and simplification in the Chinese script (for historical overviews see Behr (2005), Bottéro (1998), and Erlman (1990)), and European scholars continued this intellectual trend (Pauthier, 1838; Warburton, 1741). For example, H. J. Klaproth suggested that through regular tracing the once-iconic Chinese characters became more "abbreviated and cursive" as the features of their images began to "blur and disappear", resulting in a kind of shorthand (Klaproth).

Although the idea that characters typically simplify is intuitive, it should not be taken for granted. Linguistic systems are shaped by multiple pressures; some of these forces reinforce each other but others act in opposite directions (Haiman, 2010). If we consider a single character in isolation, reducing the complexity of the character may make it easier to read and write. Yet if we consider the entire inventory of characters, reducing visual complexity may make the characters harder to distinguish from each other (Pelli et al., 2006; Wiley & Rapp, 2019). Even a randomly-generated inventory of symbols may be distinctive enough if the inventory is small, but distinctiveness is harder for large symbol inventories to achieve, and may have become especially relevant to written Chinese as the size of the character inventory has grown over time (Chang et al., 2016). If simplicity and distinctiveness trade off against each other, then simplification over time no longer appears to be inevitable, and two additional hypotheses must be considered. If the relative weights of these factors shift in favour of distinctiveness over time, then it is possible that character complexity will increase, as has occurred for the examples on the right of Figure 1. Alternatively, if simplicity and distinctiveness remain in equilibrium, it is possible that character complexity will remain steady over time.
Some support for this final hypothesis is provided by the recent work of Miton and Morin (2021), who analyzed a phylogeny including more than a hundred scripts and report that descendant scripts show no general tendency to either increase or decrease in complexity relative to ancestor scripts. To adjudicate between these hypotheses, we examine the evolutionary trends of the Chinese script over the course of its recorded history. We view Chinese writing as a large natural experiment in which countless readers and writers over thousands of years have shaped its graphical landscape in ways that reflect the fundamental pressures acting on the evolution of writing systems more broadly. By leveraging computational methods at scale, we attempt to clarify how and why the Chinese writing system has changed in visual complexity over time.

METHOD & RESULTS

We began by collecting 38,066 images of historical Chinese characters from a popular Chinese etymology website called hanziyuan.net. Hanziyuan includes forms from three key historical scripts: oracle bone script (甲骨文), bronze script (金文), and small seal script (小篆書). The oldest surviving examples of the Chinese script are oracle bone inscriptions from the Shang dynasty (ca. 1600-1046 BCE). These texts were incised on ox scapulae and turtle plastrons and used in divination ceremonies. Bronze script appears on objects cast in bronze, including vessels, bells and tripods, and was often produced by writing on the soft clay moulds used to cast these objects. Early bronze inscriptions date from the Shang dynasty and are coeval with oracle bone inscriptions, but bronze script is most characteristic of the Western Zhou (1046-771 BCE) and Spring and Autumn (770-476 BCE) periods. After these periods a variety of scripts were used by the independent states of the Warring States period (476-221 BCE). The country was subsequently unified under the Qin dynasty (221-206 BCE), and small seal script was the official standard script during this dynasty.
To complete our dataset we added handwritten modern characters from two scripts: traditional script (正體字) (Chen, 2020), which is used today in Taiwan, Hong Kong, and Macau and by parts of the Chinese diaspora, and simplified script (简化字) (Liu et al., 2011), which replaced the traditional script in mainland China. As described later, we also analyzed printed modern characters, but chose to focus on handwritten modern forms for maximum comparability with oracle bone forms. Although our dataset includes more than half a million images of Chinese characters it provides an incomplete picture of the great diversity of historical Chinese scripts. By necessity we are constrained to work with sign forms that have survived in the historical record and can be dated to a period; it is not possible to probe the scope of written traditions that left no trace or are yet to be uncovered. Even among surviving materials, entire scripts are missing from our data, including scripts used during the Warring States period (Park, 2016) and the clerical script widely used during the Han dynasty (206 BCE-220 CE). Further, within any period there may be substantial differences between the standard form of a character and a range of informal variants (including cursive forms), and our dataset focuses on standard forms. Despite these limitations, our data seem sufficiently rich to determine whether or not the evolution of Chinese characters shows a general tendency towards simplification, as we explain in more detail below. The full set of images includes representatives of 3,889 distinct characters. This set includes all characters that appear either on hanziyuan.net or in one of our modern handwritten data sets, and also in the Chinese Lexical Database (CLD) (Sun et al., 2018). We focus on characters from the CLD because our analyses draw on information including character frequency that is included in this database. 
For each of the 3,889 characters in our dataset, we have up to 291 images of its variants from each script. When a character has multiple variants within a single script, the complexity of the character is defined as the median complexity across all of these variants. Images from all sources underwent the same preprocessing steps to control for size and stroke thickness, and full details can be found in the supplementary material.

Following previous studies (Garrod et al., 2007; Kelly et al., 2021), we define the visual complexity C of an image as its perimetric complexity (Arnoult & Attneave, 1956; Pelli et al., 2006):

C = P² / A,

where P is the sum of the interior and exterior perimeters of the image, and A is its area. Perimetric complexity has been shown to predict several aspects of human perception, including the efficiency, accuracy and speed of recognizing letters and characters from multiple scripts including modern Chinese (Chang et al., 2016; Pelli et al., 2006; Wang et al., 2014; Wiley et al., 2016; Zhang et al., 2007). Other complexity measures are possible, including the number of black pixels in an image, the length of an image's description in a standardized representation language, and measures related to writing such as the number of strokes in a character and the approximate time taken to write a character. Previous work suggests that alternative measures like these are highly correlated both with perimetric complexity and with each other (Wang et al., 2014; Zhang et al., 2007), and we report similar results in the supplementary material. The substantial correlations between all of these measures suggest that our conclusions are probably robust to the choice of complexity measure.

Changes in Complexity Over Time

Figure 2 shows how character complexity has changed across the five scripts in our analysis.
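Perimetric complexity (perimeter squared over ink area) can be computed directly from a binary character image. The sketch below is illustrative rather than the authors' preprocessing pipeline: it approximates the perimeter by counting exposed pixel edges between ink and background.

```python
import numpy as np

def perimetric_complexity(img) -> float:
    """C = P^2 / A for a binary image.

    P approximates the total (interior + exterior) boundary length as the
    number of edges between foreground and background pixels; A is the
    number of foreground ("ink") pixels."""
    ink = (np.asarray(img) > 0).astype(int)
    padded = np.pad(ink, 1)  # zero-pad so the outer boundary is counted
    p = np.abs(np.diff(padded, axis=0)).sum() + np.abs(np.diff(padded, axis=1)).sum()
    a = ink.sum()
    return float(p) ** 2 / a if a else 0.0

# a filled 10x10 square: P = 40, A = 100, so C = 16
square = np.zeros((20, 20))
square[5:15, 5:15] = 1
print(perimetric_complexity(square))  # → 16.0
```

Pixel-edge counting overestimates the Euclidean perimeter of diagonal strokes, so published implementations often smooth the contour or use anti-aliased perimeter estimates; absolute values then differ, but rankings across characters are usually stable.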
Each character has been assigned to one of four streams depending on the script in which it first appears in our dataset: for example, the oracle stream includes all characters for which we have an oracle bone form. Figure 2 suggests that characters tend to increase in complexity up to seal script and subsequently become less complex. To confirm the changes in complexity suggested by Figure 2, we used the brms package (Bürkner, 2017) to run a Bayesian mixed effects regression with script as a predictor of complexity, and included character as a random intercept and a random slope for script. The 95% credibility intervals for the coefficients that capture differences between successive scripts all exclude zero, suggesting that the two increases in complexity up to seal script and the two subsequent decreases are all statistically reliable. Figure 2 also includes results for modern characters printed in two fonts. Complexity scores are substantially higher for printed than for handwritten forms, but regardless of whether we consider printed or handwritten versions of modern characters, we find that traditional and simplified forms are both more complex than their oracle counterparts. Figure 2 reveals two distinct ways in which character complexity has increased through time. First, the oracle and bronze streams both increase in complexity up through seal script, suggesting that individual characters often increase in complexity. Second, the characters in each successive stream tend to be more complex than characters in previous streams, suggesting that there is a tendency for new characters added to the inventory to be more complex than existing members. Because our dataset is missing many forms, and because forms from earlier scripts are more likely to be missing, this finding must be interpreted with caution. For example, thousands of known oracle bone forms are missing from our dataset because they have never been deciphered. 
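The confirmation above uses a Bayesian mixed-effects model fitted in R with brms, which is not reproduced here. As a deliberately simpler, hypothetical alternative, a bootstrap confidence interval for the mean per-character change between two successive scripts conveys the same basic question of whether a change in complexity is statistically reliable (the complexity values below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical per-character perimetric complexity in two successive scripts
oracle = np.array([6.2, 8.1, 5.5, 9.0, 7.3, 6.8, 10.2, 5.9])
bronze = np.array([7.0, 8.9, 6.1, 9.8, 8.0, 7.5, 11.0, 6.4])
diffs = bronze - oracle  # paired, per-character change in complexity

# bootstrap the mean difference; a 95% CI excluding zero suggests
# a reliable change in complexity between the two scripts
boot_means = np.array([
    rng.choice(diffs, size=diffs.size, replace=True).mean()
    for _ in range(10_000)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean change = {diffs.mean():.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```

Unlike the mixed-effects regression, this sketch ignores the random-effects structure across characters and the full sequence of scripts; it is only meant to make the notion of a statistically reliable increase concrete.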
Although our results reveal a net increase in complexity over 3000 years, we do find evidence of simplification from the seal script on. Scholars often suggest that the transition between seal script and modern characters involved a process of simplification (Schindelin, 2019), and our results for handwritten (but not printed) traditional characters support this view. The simplified script was specifically designed to reduce the visual complexity of written Chinese (Pan et al., 2015), and as expected our results for both handwritten and printed characters confirm that simplified forms are less complex than traditional forms. Our results therefore provide partial support for the standard view that writing systems are shaped by forces that tend towards simplification, but challenge the idea that these forces have been dominant over the history of the Chinese script. Although Figure 2 suggests that modern forms tend to be more complex than oracle forms, it is possible that some kinds of characters defy this overall trend. Characters with iconic origins, characters with small numbers of components, and high frequency characters all seem like especially good candidates for simplification. We now consider each of these subclasses in turn, and in all three cases we report consistent evidence for increases in complexity over time. Informal discussions of the simplification of Chinese often refer to examples involving characters like 車 [vehicle] (see Figure 1) and 馬 [horse] that originated from detailed illustrations of animals and other concrete natural elements (Norman, 1988;Qiu, 2000). Because iconic images tend to be complex, it is natural to think that unnecessary detail should be shed over time (Norman (1988), although see Miton and Morin (2019)), and this intuition probably accounts for the widespread assumption that Chinese characters typically simplify. To test this intuition we drew on the character classifications available in the CLD. 
Characters classified as pictographic originate from iconic forms, and pictologic characters are similar but more symbolic in nature. Pictosynthetic characters are combinations of multiple pictographic characters, and pictophonetic characters are combinations of phonetic and semantic components. The fifth class (other) is a catch-all, and each character is assigned to exactly one class.

Figure 2. Complexity over time of characters grouped according to their first appearance in our dataset. The first stream (red) includes characters for which we have at least one oracle bone form, and the bronze, seal and traditional streams are shown in yellow, green and blue respectively. Grey lines show results for the traditional stream based on characters printed in two fonts. Line thickness is proportional to the number of characters included in each stream, and error bars (which are small and therefore difficult to see) show the standard error of the mean.

OPEN MIND: Discoveries in Cognitive Science

To see the strongest possible differences between oracle bone and modern forms we treat traditional characters as representatives of the modern era, and Figure 3a suggests that characters from all five classes increased in complexity between the oracle bone and traditional scripts. The analysis includes only characters that are present in our dataset for both scripts, and the y-axis (complexification) shows the difference in perimetric complexity between the two scripts. The supplementary material includes analyses which suggest that the increase in complexity for each class is statistically reliable. We therefore conclude that the net increase in complexity between oracle bone and traditional forms summarized by Figure 2 applies to many kinds of characters, including those with iconic origins.
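The complexification score just described is a paired difference over characters attested in both scripts. A minimal sketch, with invented complexity values and character names:

```python
import numpy as np

# Invented perimetric complexities for characters in each script.
oracle = {"vehicle": 70.0, "horse": 85.0, "tree": 40.0}
traditional = {"vehicle": 95.0, "horse": 110.0, "tree": 55.0, "new": 120.0}

# Complexification = traditional minus oracle complexity, computed
# only for characters attested in both scripts ("new" is excluded).
shared = oracle.keys() & traditional.keys()
complexification = {c: traditional[c] - oracle[c] for c in shared}
print(np.mean(list(complexification.values())))
```

Restricting the comparison to shared characters is what separates within-character change from the separate effect of newer, more complex characters joining the inventory.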
Because our finding that pictographic characters have increased in complexity challenges a common view about the evolution of writing systems, we developed a preregistered behavioral experiment to address the concern that this finding may be an artifact of perimetric complexity. The experiment asked 400 participants who were not fluent in Mandarin, Cantonese or Japanese to rate the relative complexity of 155 pairs of forms. The characters used were the 155 characters assigned to the pictographic group in Figure 3a. In the handwritten condition, the traditional forms were drawn from the same set of handwritten characters analyzed in Figure 3a, and in the printed condition the traditional forms were shown in Hiragino Sans GB. Figure 3c shows that on average traditional forms were rated as more complex than oracle bone forms in both the handwritten and printed conditions. A set of preregistered statistical tests supported this conclusion for handwritten but not printed characters. Full details are available in the supplementary material, and taken overall the results support the conclusion that pictographic characters have traditional forms that are more complex than their oracle bone forms. In addition, the experiment provides some evidence that perimetric complexity is an adequate complexity measure for our purposes. One way for a character to increase in complexity is to acquire new components. Characters with modern forms consisting of a single component only (e.g., 車 [vehicle]) may therefore be especially likely to show evidence of simplification. We used data from the Chinese Characters Decomposition (CCD) project to sort our dataset into characters with different numbers of components (Wikimedia Commons, 2021). The CCD project is based on simplified characters and provides decompositions that are purely graphical rather than etymological.
Even so, these data provide a useful way to distinguish characters with different numbers of components. Figure 3b shows that even characters with a single component have become more complex over time. Increases in complexity, however, tend to be greater for characters with multiple components than for single component characters. One possible reason for simplification is that writers sometimes cut corners and simplify when reproducing a character. On this account, the characters written most frequently should be most likely to simplify. This hypothesis is consistent with Zipf's law of brevity, which states that frequently used linguistic units tend to be especially simple, and with a body of related work that has explored how language is shaped by efficiency considerations (Bentz & Ferrer-i Cancho, 2016;Zipf, 1949). We tested this hypothesis by using character frequencies from the CLD and assuming for simplicity that CLD frequencies (which are based on modern data) are also representative of frequencies for earlier scripts. Some characters are components of other characters, and we define the adjusted frequency of a character as the number of times it is written per million characters, either in isolation or as part of another character. We sorted our characters into six frequency bins using a logarithmic scale of base ten, and compared average character complexity in each bin both within and across scripts. Figure 4 shows that characters within each frequency bin show parallel changes in complexity over time. This result indicates that even the most frequently used characters do not simplify over time. Although characters in all frequency bins have higher traditional complexities than oracle bone complexities, within each script frequently used characters tend to be simpler. This result is consistent with Zipf's law of brevity, and suggests that Chinese characters are indeed shaped by efficiency considerations. 
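The base-ten logarithmic binning of adjusted frequencies can be sketched as follows; the frequency and complexity values below are invented stand-ins for the CLD-derived numbers described above.

```python
import numpy as np

# Hypothetical adjusted frequencies (uses per million characters,
# counting appearances in isolation and as components of other
# characters) paired with hypothetical complexity scores.
freqs = np.array([3, 42, 870, 12000, 150000, 9, 510, 2600])
complexities = np.array([95, 80, 74, 60, 48, 102, 77, 66])

# Six frequency bins on a base-10 logarithmic scale:
# bin 0 covers [1, 10), bin 1 covers [10, 100), and so on.
bins = np.floor(np.log10(freqs)).astype(int)
for b in sorted(set(bins)):
    mask = bins == b
    print(f"bin [10^{b}, 10^{b + 1}): "
          f"mean complexity {complexities[mask].mean():.1f}")
```

Comparing mean complexity across these bins within a script tests Zipf's law of brevity, while comparing the same bin across scripts tracks change over time.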
Figure 4 also reveals that changes in complexity over time are modulated by frequency, and that frequently used characters tend to show smaller increases in complexity up to seal script and smaller decreases in complexity thereafter. High frequency, however, is evidently not sufficient to produce simplification overall. A statistical analysis supporting all of these conclusions is presented in the supplementary material. Our analyses so far provide consistent evidence that modern characters are more complex than oracle forms, and suggest that pictographic characters, single-component characters and high frequency characters are not exceptions to this general trend. These results came as a surprise to us, and led us to consider possible reasons why complexity may have increased over time. The next section introduces two potential explanations, both of which invoke evolutionary pressures in favor of distinctiveness. Both explanations seem plausible to us, but we acknowledge that we do not have strong evidence for either one.

Complexity and Distinctiveness

The expectation that characters tend to simplify can be informally motivated by the idea that writing systems increase in communicative efficiency over time. Simplicity, however, is just one relevant dimension, and communicative efficiency is best conceptualized as a near-optimal trade-off between several competing dimensions (Kemp et al., 2018). We focus here on the trade-off between simplicity and distinctiveness, or the ease with which characters can be distinguished from each other (Wiley & Rapp, 2019). If simplicity and distinctiveness are inversely related (that is, if more complex characters are also more distinctive), then pressures toward distinctiveness could help to explain why complexity has increased over time. The character inventory could remain communicatively efficient at all stages of this process as long as simplicity is always maximized for the current level of distinctiveness.
Measuring distinctiveness is challenging, and to our knowledge there is no standard approach in the literature. We therefore developed our own distinctiveness measure using a convolutional neural network (CNN) trained to classify handwritten Chinese characters. The results emerging from this measure are suggestive, but as we discuss later the measure is subject to some important limitations. We therefore view our distinctiveness analyses as a tentative initial exploration that should be revisited and extended in future as improved distinctiveness measures become available. Our measure is motivated in part by previous work suggesting that the internal representations generated by CNN classifiers provide a good account of human similarity judgments (Peterson et al., 2018). In our case, the CNN is a GoogLeNet architecture trained on a large database of simplified characters (Zhong et al., 2015). To make our character images maximally comparable to the images on which the CNN was trained, we included an extra image processing step that increased the stroke width of each character. Passing an image through the network generates an activation vector over each layer, and we took the activation over the final fully connected layer as the representation for each character. Distinctiveness can then be defined as the average Euclidean distance between a character and its closest 20 contemporary neighbours. The neighborhood size of 20 is based on previous work on orthographic similarity that uses the same definition of distinctiveness but different underlying representational spaces (Sun et al., 2018;Yarkoni et al., 2008). In cases where our data include multiple images for a specific character in a specific script, we treat the median complexity image as the definitive variant of the character. We used the distinctiveness measure just introduced to explore whether complexity and distinctiveness trade off against each other. 
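The nearest-neighbour part of this measure can be sketched independently of the specific network: given one activation vector per character, score each character by its average Euclidean distance to its 20 closest contemporaries. The helper below is ours, and the random vectors stand in for the GoogLeNet final-layer activations used in the paper.

```python
import numpy as np

def distinctiveness(activations, k=20):
    """Average Euclidean distance from each character's activation
    vector to its k nearest contemporaries (k = 20 in the paper)."""
    acts = np.asarray(activations, dtype=float)
    # Full pairwise Euclidean distance matrix.
    d = np.linalg.norm(acts[:, None, :] - acts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-distances
    nearest = np.sort(d, axis=1)[:, :k]  # k closest neighbours per row
    return nearest.mean(axis=1)

rng = np.random.default_rng(0)
acts = rng.normal(size=(100, 64))  # stand-in for CNN activations
scores = distinctiveness(acts)
print(scores.shape)  # one distinctiveness score per character
```

A character crowded by many visually similar neighbours gets a low score; an isolated character gets a high one.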
Figure 5a shows that character complexity and distinctiveness are positively correlated within each script. This result suggests that complexity and distinctiveness trade off at the level of individual characters, and that individual characters may need to become more complex in order to become more distinctive. To explore whether a similar trade-off applies at the level of entire systems of characters, we repeatedly sampled miniature systems of 50 characters and asked whether systems with higher average distinctiveness also tend to be higher in average complexity. We generated samples separately for each script using two distinct sampling strategies. Figure 5b is based on sorting the characters in each script into 6 complexity bins (low complexity to high complexity), and then generating 200 random samples within each bin. Figure 5c used a similar approach except that the bins were based on distinctiveness rather than complexity. In both cases, average complexity and average distinctiveness were correlated, suggesting that complexity and distinctiveness trade off at the system level. Next we considered how distinctiveness has changed over time, and Figure 6a shows a steady increase in distinctiveness up to the traditional script. To control for inventory size, Figure 6b tracks distinctiveness within streams of fixed membership: the oracle stream includes all characters that first appear in the oracle bone script and that are attested in all subsequent scripts, and the bronze, seal and traditional streams are defined analogously. Direct comparisons of distinctiveness between streams (e.g. bronze vs seal) are not possible because the streams have different numbers of characters, and the key question is how distinctiveness changes over time within each stream. For all streams, Figure 6b reveals that distinctiveness increases up to the traditional script and then falls. For comparison with the results for handwritten characters, Figure 6b includes versions of the traditional stream for characters printed in two fonts.
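The miniature-system sampling described above can be sketched as below. For brevity this sketch draws unstratified random samples rather than the complexity- or distinctiveness-binned samples behind Figures 5b and 5c, and the per-character scores are invented with a built-in positive relationship.

```python
import numpy as np

rng = np.random.default_rng(0)
# Invented per-character scores standing in for measured complexity
# and distinctiveness; the positive link is built in for illustration.
complexity = rng.gamma(5.0, 15.0, size=2000)
distinct = 0.02 * complexity + rng.normal(0.0, 0.5, size=2000)

# Sample miniature systems of 50 characters and record each system's
# average complexity and average distinctiveness.
mean_c, mean_d = [], []
for _ in range(200):
    idx = rng.choice(2000, size=50, replace=False)
    mean_c.append(complexity[idx].mean())
    mean_d.append(distinct[idx].mean())

r = np.corrcoef(mean_c, mean_d)[0, 1]
print(f"system-level correlation r = {r:.2f}")
```

A positive system-level correlation here mirrors the paper's finding that sets of more distinctive characters also tend to be more complex on average.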
Because handwritten characters are produced by writers who desire to minimize writing time, we expected distinctiveness scores to be lower for handwritten than for printed characters. This finding emerges for simplified characters, but for traditional characters distinctiveness is lower for characters printed in Hiragino Sans GB than for handwritten characters. A second unexpected result is that across both traditional and simplified scripts, distinctiveness is substantially lower for Hiragino than for SimSun. A possible explanation for both results is that our distinctiveness measure is overly sensitive to stylistic differences (e.g. whether or not a font includes serifs) that are of limited interest for our purposes. Our distinctiveness measure is subject to another important limitation which means that the results in Figure 6 should be taken as suggestive but not conclusive. The neural network that we used was trained on simplified characters, and may be relatively poor at distinguishing between oracle bone forms largely because they are qualitatively different from the simplified forms in the training set. This concern does not affect the finding that distinctiveness and visual complexity appear to trade off within each script ( Figure 5), but does affect our comparisons across scripts (Figure 6). Future research can potentially address this concern by supplementing our distinctiveness results with similar analyses based on a network trained on oracle bone forms. Establishing a causal account of historical change does not seem possible given the data available to us, but we offer two plausible explanations of the finding that complexity and distinctiveness have both increased through time. The first explanation holds that distinctiveness is the driving factor, and that an increase in distinctiveness has caused complexity to increase. 
Distinctiveness is especially relevant to readers, who must distinguish each character viewed from possible alternatives, and may have become increasingly important as the relative balance between readers and writers has shifted over time. In the modern era, a character that is written, carved or inscribed once can be read by an audience of millions, and it seems plausible that the average audience size for each act of writing has steadily increased over time. The second possible explanation holds that neither complexity nor distinctiveness is the driving factor, but that both have been influenced by a third factor: the dramatic expansion of the Chinese character inventory over time. When new characters are added, distinctiveness must remain above some threshold in order for the script to remain usable. If most of the simple forms are already taken, new characters will have to be relatively complex in order to maintain distinctiveness above this threshold, which means that average complexity will increase over time. In principle, it may be possible to add new characters while holding average distinctiveness constant, but this possibility may be unachievable if new characters must be created by reusing components of existing characters. As a result, it is possible that increasing inventory size inevitably requires increases in both complexity and distinctiveness. Our two possible explanations are not mutually exclusive, and it is possible that the balance between complexity and distinctiveness has shifted over time and that the expansion of the character inventory has driven increases in both complexity and distinctiveness. These two possibilities, however, are conceptually distinct, and the first could apply even if the size of the character inventory were held constant.
Although both explanations seem plausible to us, the second seems likely to carry more weight because the expansion of the character inventory is such a striking development in the history of Chinese characters. To understand this development in more detail, future studies could simulate different hypothetical strategies for generating novel characters over time, and could directly test the idea that the only feasible strategies lead to increases in average complexity and average distinctiveness.

DISCUSSION

Writing systems are often thought to simplify over time, but we found that the visual complexity of modern Chinese characters has increased relative to oracle bone forms. This increase in complexity has occurred at the level of individual characters and at the level of the entire inventory, whose average complexity has been increased by the addition of relatively complex characters. The iconicity of early Chinese characters has not stood in the way of this process, with early iconic forms complexifying over time even as they become more abstract. High frequency, likewise, is not enough to protect against increases in complexity. When we look beyond the popular examples brought forward by proponents of simplification, we see that for every intuitive example of simplification (e.g. left side of Figure 1), there are many other examples of complexification occurring instead (right side of Figure 1). A plausible explanation for our results is that writing systems, just like languages, are subject to multiple competing pressures, including a pressure for distinctiveness that trades off against a pressure for visual simplicity. Future work can aim to measure and evaluate additional factors that influence the ease of reading, writing and learning characters. For example, the compositionality of the system (Myers, 2019), that is, the extent to which characters are composed of standardized recurring elements, will affect the ease with which characters can be learned.
Ease of learning probably trades off against visual simplicity: for example, Hannas (1988, p. 210) points out that 鑫 is high in visual complexity but relatively easy to learn because it repeats a single element three times. The compositionality of a system can potentially be formulated using a setwise complexity measure that assesses the complexity of entire systems of characters. One such measure, for example, defines the complexity of a set as the length of the minimal description of all characters in the set. If the characters in the set are all built from a small library of components, then the minimal description would involve describing each component then specifying how the components are combined to form characters. Although our results suggest that the average visual complexity of individual characters in the oracle stream has increased over time, the setwise complexity of these characters may well have decreased as the writing system has become more compositional. Testing this idea would probably require a sophisticated computational approach that draws on techniques from the literature on computer vision in order to capture elements that recur across sets of handwritten characters.

Reconciliation With Prior Work

At first sight, our finding that traditional forms are more complex than oracle forms seems directly incompatible with earlier claims about the evolution of writing and of the Chinese script in particular. Our disagreement with prior research, however, is perhaps less fundamental than it seems. To our knowledge, previous studies have not directly measured changes in the visual complexity of Chinese characters over time, which means that our findings do not conflict with any specific empirical results from the literature. The conflict is rather with general claims about how the Chinese script has developed over time. In the literature on written Chinese, "simplification" has been used in a range of different ways.
In discussions of the shapes of individual characters, simplification is broadly used to refer to a bundle of changes that includes a progression away from pictorial forms and towards more abstract symbols in addition to changes in visual complexity. Simplification has also been used to refer to increases in consistency across tokens of a single character, and to increases in stylistic consistency across an entire script, including the development of a repertoire of standard strokes (Qiu, 2000). Because simplification has been used to label so many different kinds of changes, many previous ideas about simplification remain intact despite our findings about changes in visual complexity over time. If we focus on visual complexity in particular, experimental work (Garrod et al., 2007) and a prior analysis of the Vai script (Kelly et al., 2021) both suggest that written symbols tend to be relatively complex when first created but become simpler as they are repeatedly used. These results led us to anticipate similar changes in written Chinese, but in retrospect we see two important differences between our work and the studies of both Garrod et al. (2007) and Kelly et al. (2021). First, both previous studies trace the evolution of symbols from their moment of birth onwards, but the earliest forms in our analysis are drawn from a time at which Chinese characters had already been in use for hundreds of years. The historical record does not reveal what the very first Chinese characters looked like, and it is possible that the earliest stages in the development of the script were characterized by decreases in visual complexity. Second, both previous studies considered symbol inventories that were relatively stable in size over time, but we considered a system that has significantly increased in size. 
It is possible that simplification is typical when the size of an inventory remains constant, but that as an inventory increases in size, complexification becomes necessary in order to hold distinctiveness at an acceptable level.

Limitations and Caveats

Although our work highlights the idea that the graphic dimension of writing is shaped by general functional principles, our results are coloured by the historical and material context in which written Chinese developed. Our dataset covers a period of approximately 3,000 years; in this time, the characters that we study have transitioned from brushed and etched signs to digital fonts typed onto computer screens. There is no doubt that the epigraphic technology available in a given period has conditioned the degree of complexity that the script could tolerate. Just as the change from a reed stylus to a wedge-tipped stylus in mid-third millennium Mesopotamia introduced a more compact and consistent style of cuneiform, in China a transition from bone carving to the use of soft-clay impressions, for example, would have altered the parameters of graphic possibility (Demattè, 2010; Škrabal, 2019). The social functions of different scripts are also likely to influence their relative complexities. For example, scripts used informally may tend to be simpler than scripts used for official documents, and ornamental scripts used for display purposes may be especially complex. Historical precedent and contact with other graphic traditions are also factors that bear consideration in any examination of script change. However, few palaeographers subscribe to a strictly deterministic view of script evolution, whether in terms of scribal media, social function or contact. For example, technological shifts in the production of the Vai script, from reed pens to modern pencils and digital fonts, do not account for any substantial changes in visual complexity, nor indeed did standardisation campaigns or shifts in genre (Kelly et al., 2021).
The reality that the Egyptian hieroglyphic script was transformed into the much simpler hieratic script is in no way negated by the continued and concurrent use of the hieroglyphic script for monumental display. In short, we maintain that the material and ideological circumstances of a writing system are informative but do not overwhelm the dynamics of change brought about by actual use, including reading, writing and inter-generational transmission. The most recent phase in the history of written Chinese concerns the simplified script reform of 1956, when China's Ministry of Education replaced a core set of characters with simplified versions. This politically-motivated reform could hardly be characterised as a subtle invisible-hand process, yet it is still part of the bigger story of script change and demands its own explanation. After all, deliberate acts of simplification have taken place several times in the history of the script (for an early example see Semedo (1655, p. 43)). Two nation-wide campaigns, in 1935 and 1977, failed abysmally, and even the 1956 reform had its limitations. Despite affecting only about half of the inventory, the over-simplification of certain characters nonetheless introduced unintended reading difficulties (Pan et al., 2015), suggesting that the pull towards distinctiveness is formidable even in the face of heavy reform.

New Directions

Our results suggest that simplification is not the dominant trend in the evolution of Chinese characters, but additional work is needed to determine the extent to which complexification has occurred. One pressing need is for a dataset that includes more scripts than the five analyzed here. Some of these scripts will correspond to subdivisions of the oracle-bone and bronze scripts considered here. For example, oracle-bone sources have been organized into five periods (Dong, 1964), and measuring complexity changes across these periods may be revealing.
Other scripts could be added to the current dataset, including scripts written on stone, bamboo, silk, and wood during the Zhou dynasty (1046-256 BCE), clerical script (隸書), and a variety of cursive and semi-cursive scripts known from the Zhou dynasty on. Individuating and enumerating historical scripts is unavoidably subjective, but an upper bound on the number that might be considered is given by Yu Yuanwei (6th century CE), who listed around 100 script styles, many of which were ornamental and never in everyday use (Tseng, 1993). Wherever possible, each form in an extended database should be annotated with the estimated date of production, means of production (e.g. carved in stone) and the genre of the text from which the form was collected. Compiling such a database would require a major effort from a large team of researchers, but would allow analyses of historical change that attempt to control for genre and means of production. For example, the "Chinese Calligraphy and Inscription Collection" (United Digital Publications, 2005) offers an opportunity to study changes in calligraphic styles used by poets between ca. 2205 BCE and 1636 CE, while controlling for genre and medium. Regardless of how carefully an extended database is compiled, large gaps are inevitable. For example, oracle bone texts belong to a relatively narrow genre, and there is little evidence about how characters were written at the time outside the context of divination. Despite these gaps in the historical record, the available data seem sufficient to allow robust tests of Qiu's claim that instances of complexification "pale in significance when compared with the importance of simplification" (Qiu, 2000, p. 48). As suggested earlier, accounts of the evolution of Chinese often include several distinct changes under the broad heading of simplification (Qiu, 2000). Our work suggests an alternative approach that attempts to isolate different factors (e.g. 
visual simplicity, distinctiveness and compositionality) that influence the ease of reading, writing, and learning characters, and to explore ways in which these factors either support or trade off against each other. We made a start in this direction by working with formal measures of simplicity and distinctiveness, but future work can aim to extend and improve these measures, and to measure and evaluate the role of additional factors. Characterizing the factors in question is somewhat challenging, but exploring trade-offs between these factors may turn out to be even more challenging. For example, future work should aim to test the idea that attested scripts achieve near-optimal trade-offs between simplicity and distinctiveness. Addressing this question will probably require comparing attested trade-offs with the trade-offs achieved by a large space of hypothetical scripts, and characterizing these hypothetical scripts is likely to require a sophisticated computational approach.

Conclusion

Historical changes in written Chinese have undoubtedly been shaped by multiple factors, but our findings nevertheless suggest that modern characters are more complex than their oracle bone equivalents. This result can be explained in part by a trade-off between simplicity and distinctiveness, and written Chinese therefore provides yet another example of how linguistic systems are shaped by competing functional constraints. Although our work challenges the specific claim that writing systems naturally become simpler over time, it is entirely compatible with the broader view that writing systems are fundamentally shaped by the need for efficient communication.
Synthesis of Polyaniline/Scarlet 3R as a Conductive Polymer

Polyaniline (PANI) was prepared in the presence of the acidic dye scarlet 3R. Color tuning was performed on PANI through doping–dedoping processes and by changing the solvent used during the optical absorption spectroscopic measurements. The chemical structure of the resulting polymer–dye composite was analyzed using infrared absorption spectroscopy, and it showed the occurrence of secondary doping in m-cresol. The shape of the UV–Vis optical absorption spectra for the composite solution is dependent on the types of organic solvents used during the analysis, which was influenced by the conformation of PANI and the ionic interactions between PANI and scarlet 3R.

Introduction

Polyaniline (PANI) is a promising conductive polymer whose unit molecule, aniline, is popularly used as a raw material for dyestuff, pigments, and medicine [1,2]. Composite materials using PANI are based on the conductivity of polymers; therefore, those combined with graphene [3] and magnetic materials [4] have been reported. Physical properties of giant magnetoresistance [5] and negative permittivity [6] for PANI have been studied. PANI that is synthesized via aniline polymerization is stable in air and exhibits moderate electrical conductivity, which is derived from both ionic and electrical conduction in the π-conjugation of the main chain. Its synthesis is basic and convenient when compared with the synthetic routes required for other conductive polymers, in the sense that there is no need to use inert gas or organic solvent during the preparation process. Synthesis of PANI composites with inorganic materials such as TiO2 via emulsion polymerization yields a product with high thermal stability [7]. PANI synthesis is generally conducted in an aqueous medium with the addition of an oxidizer as a polymerization initiator under acidic conditions.
The anticorrosion function of PANI for metals has also been developed for applications [8]. Conductive polymer color tuning is often performed by adding dye; however, this is difficult because conductive polymers inherently possess a deeper, richer color than most dyestuffs on the market owing to their extensive π-electron system. Despite this disadvantage, conductive polymer-dye composites are extremely valuable for practical applications in daily life. In this research, the synthesis of PANI in the presence of the acidic dye scarlet 3R was performed, followed by doping (oxidation) and dedoping (reduction) processes. Changes in the electronic state and color of PANI upon application of the dye are discussed; the acidic dye used in this study partly functions as a surfactant, and reduction using ammonia/water further serves to remove the dye from the polymer.

PANI-3R(ES)
The preparation of polyaniline in the presence of scarlet 3R was performed with the aid of ammonium persulfate (APS) as an oxidizer (Scheme 1). First, scarlet 3R (2 g) was added to 100 mL of water at ca. 0 °C, followed by the dissolution of 2 g of aniline. Sulfuric acid (2 g) was then added, which resulted in a rapid decrease in the pH of the solution, as shown in Figure 1a. APS was subsequently added. As shown in Figure 1b (a magnification of Figure 1a), a four-step pH change of the reaction mixture was observed. After approximately 20 h, the reaction mixture was filtered, and the resultant polymer residue was dried under reduced pressure, yielding 1.32 g of the desired product. The resulting polymer-dye composite in its as-prepared form is abbreviated as PANI-3R(ES), where ES denotes emeraldine salt. In this polymerization, scarlet 3R needs to be added to the aniline in water to form an aniline/dye complex prior to the addition of sulfuric acid. Preparation of PANI by the normal method was performed for comparison.
The quantities of the chemicals were the same except that no scarlet 3R was used (yield = 0.802 g). The PANI prepared by the normal method is abbreviated as PANInorm.

PANI-3R(EB)
An emeraldine base form of PANI-3R was prepared by treatment with an ammonia/water solution. PANI-3R(ES) (10 mg) was dissolved in 0.1 M ammonia/water solution and stirred for 1 h. The resulting polymer slurry was filtered, and the residue was dried under reduced pressure to give the reduced form of the polymer, abbreviated as PANI-3R(EB) (Scheme 2). Here, EB denotes emeraldine base.

Scheme 2. Synthesis of polyaniline-scarlet 3R-emeraldine base (reduced form).
Chemicals
Scarlet 3R was purchased from Takiguchi Shoten Co. (Tokyo, Japan). Aniline and APS were obtained from YONEYAMA KAGAKU KOGYO KAISHA, LTD. (Osaka, Japan) and used as received.

Infrared (IR) Absorption Measurement
Each sample was measured using the KBr pellet method. The powdered sample (an amount just enough to cover the tip of a spatula) was mixed with KBr, and the mixture was pressed with a hand press to form a thin, transparent pellet.

Thermogravimetric (TG) Analysis
TG analysis was performed on the PANI samples. The samples were set in a platinum pan and heated to 600 °C at a rate of 10 °C/min under an argon atmosphere with an Ar-gas flow rate of 200 mL/min.

X-ray Diffraction (XRD) Spectroscopy
The powdered PANI sample was analyzed at room temperature using CuKα radiation (λ = 1.5428 Å). The XRD signals were interpreted directly from the 2θ values.

Instrumentation
Infrared (IR) absorption spectra were obtained using an FT/IR-4600 spectrometer (Jasco, Tokyo, Japan) by the KBr method. UV-Vis absorption spectra were measured using a V-630 UV-Vis optical absorption spectrometer (Jasco, Tokyo, Japan). Electron spin resonance (ESR) measurement of the solid sample packed into a 5-mm quartz tube was performed using a JEOL JES TE-200 spectrometer in the X-band (9.2-9.9 GHz) (JEOL, Tokyo, Japan). Electrical conductivity was measured by the four-probe method using a Loresta-GP with an MCP-TP06P probe (Mitsubishi, Tokyo, Japan). Scanning electron microscopy (SEM) observations were performed with a JSM-7000F (JEOL, Akishima, Japan). Thermogravimetric analysis (TGA) was performed with an EXSTAR7000 (Seiko Instruments Inc., Chiba, Japan). IR absorption spectra were also obtained with a JASCO FT-IR 550 spectrometer (Hachioji, Japan). XRD of the samples was measured with a PANalytical X'Pert X-ray diffractometer (Almelo, The Netherlands).
FTIR
Fourier-transform infrared (FTIR) spectroscopy was conducted with the KBr method as part of the chemical and structural characterization procedure (Figure 2a). As part of the reduction process, treatment of the polymer with ammonia yielded the half-doped state, in which PANI-3R(EB) was partly doped with the residual ion. The analysis revealed that PANI comprised a sequence of quinonoid (Q) and benzenoid (B) structures along the main chain (Figure 3).
The assignment of the functional groups was performed following the reported literature [9]. An absorption band due to NH2 stretching was observed at 3246 cm−1. Both PANI-3R(ES) and PANI-3R(EB) displayed N=Q=N and N-B-N stretching vibrations. PANI-3R(EB) had no absorption bands due to B-Q-B stretching or C-N stretching in BQB, QBB, and BBQ. The absorption band at around 1140 cm−1 is attributed to B-N+H=Q and B-N+H-B in PANI-3R(EB), which indicates partial doping of the polymer. Here, amino cations (N+) appear in the doped state owing to the removal of one electron from the lone pair on PANI's nitrogen atom. Other absorption bands observed in the composite product are summarized in Table 1. An absorption band at 1041 cm−1 is observable for both the polymers and scarlet 3R. The PANI-3Rs and the PANI prepared by the normal method show almost the same IR absorptions, and the present IR analysis was unable to detect absorptions due to the scarlet 3R fraction in the polymer. However, a weak absorption band at 1375 cm−1 was observed as a result of SO3 absorption of the scarlet 3R fraction in PANI-3R(ES), while PANI-3R(EB) shows no absorption derived from scarlet 3R, as shown in Figure 2b (magnification).

Electron Spin Resonance
ESR analysis in the X-band was conducted for PANI-3R(ES) to confirm the presence of conduction electrons (commonly referred to as polarons). The ESR spectrum of PANI-3R(ES) was an asymmetric, Lorentz-type spectrum, which shows the presence of polarons (radical cations) delocalized along the main chain; this observation further confirms doping (Figure 5). The ΔHpp (peak-to-peak line width) value for the polymer composite was relatively narrow (0.574 mT), suggesting charge-carrier delocalization along the main chain. The g-value of the polymer composite indicates that the charge carriers in this system were indeed polarons delocalized along polyaniline's nitrogen-carbon sequence. The electrical conductivity of the pressed-pellet form of PANI-3R(ES), as measured by the four-point probe method, was 7.0 × 10−1 S/cm [5].

UV-Vis
The polymers are soluble in N-methyl pyrrolidone (NMP), tetrahydrofuran (THF), and m-cresol. Ultraviolet-visible (UV-Vis) optical absorption spectroscopy was performed on PANI-3R(ES) and PANI-3R(EB) in m-cresol solution, even though scarlet 3R possesses poor solubility in m-cresol.
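The four-point-probe conductivity quoted above (7.0 × 10−1 S/cm) follows from the standard thin-sample relation ρ = (π/ln 2)·t·(V/I). The sketch below is not the authors' procedure; the voltage, current, and pellet thickness are illustrative values chosen only to land near the reported magnitude.

```python
import math

def four_point_probe_conductivity(voltage_V, current_A, thickness_cm):
    """Bulk conductivity (S/cm) of a thin pressed pellet from a collinear
    four-point-probe reading, using the thin-sample geometric factor pi/ln(2)
    (valid when thickness << probe spacing)."""
    sheet_resistance = (math.pi / math.log(2)) * (voltage_V / current_A)  # ohm/sq
    resistivity = sheet_resistance * thickness_cm                          # ohm*cm
    return 1.0 / resistivity                                               # S/cm

# Illustrative numbers only (not reported in the paper): a 0.5 mm-thick
# pellet reading 6.3 mV at 1 mA gives ~0.70 S/cm, the reported value.
sigma = four_point_probe_conductivity(6.3e-3, 1.0e-3, 0.05)
```

The π/ln 2 ≈ 4.53 correction assumes a laterally large, uniform pellet; edge effects on a small pellet would require additional geometric correction factors.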
As seen in Figure 6a, PANI-3R(ES) and PANI-3R(EB) have absorption bands at long wavelengths due to the occurrence of secondary doping. The molecular conformation of PANI changed from a compact to an expanded coil as a result of secondary doping. MacDiarmid, Epstein, and co-workers reported that secondary doping in the polymer allows for the expansion of the effective π-conjugation length, in particular the extension of the absorption band for PANI-3R(ES) toward the red-infrared range [10,11]. The International Commission on Illumination (Commission Internationale de l'Éclairage, CIE) color spectrum identified PANI-3R(ES) as red in color and PANI-3R(EB) as yellow (Figure 6b,c).

Figure 7a presents the UV-Vis spectra for PANI-3R(ES) and PANI-3R(EB) in NMP. The absorption bands at 521 nm (shoulder) and 547 nm were due to the absorption of scarlet 3R (Figure 8), indicating that PANI-3R(ES) formed a composite with the dye. Note that scarlet 3R is poorly soluble in m-cresol. On the other hand, NH4+ treatment of the PANI-3R(EB) composite resulted in the removal of scarlet 3R, as shown by the lack of an intense absorption band at 521 nm, although traces of the band were observed. No secondary doping occurred in NMP for any of the polymers. As the international standard model, the CIE color spectrum identified PANI-3R(ES) as having a purple-blue color and PANI-3R(EB) as being blue (Figure 7b,c).

Figure 8a presents the UV-Vis spectra for PANI-3R(ES) and PANI-3R(EB) in tetrahydrofuran (THF). The method used in our study allows for the synthesis of a THF-soluble PANI derivative, although as-prepared PANI (i.e., in its doped form) is generally insoluble in organic solvents. PANI-3R(ES) displayed strong absorption bands at long wavelengths, whereas PANI-3R(EB) showed only a weak absorption band at longer wavelengths. An absorption band at 585 nm indicates the presence of the PANI emeraldine base, since PANI in THF does not experience secondary doping effects. An absorption band for scarlet 3R was not observed; the optical absorption due to scarlet 3R overlapped with the intense absorption of the polymer composites, which was derived from the π-conjugation along the main chain. The CIE color spectrum classified PANI-3R(ES) as red and PANI-3R(EB) as blue (Figure 8b,c). These results show that the electronic state and solvent effects serve as a means of color tuning the polymer composite. Doped samples (as prepared) are located in the red region of the color scale.
Figure 9a shows the UV-Vis spectra for scarlet 3R in THF and NMP solutions. Generally, the doped form of PANI (as prepared) has low solubility in organic solvents, and scarlet 3R is poorly soluble in m-cresol. Figure 9b shows soluble fractions of PANInorm, prepared by the normal method, in THF, NMP, and m-cresol solutions. The short-wavelength absorptions of PANInorm result from the π-π* transition of the benzene ring in the monomer repeat unit, while the absorptions at around 600 nm are due to the doping band. PANInorm showed no absorption band at approximately 550 nm due to scarlet 3R, which confirms that the PANI-3Rs (PANIs prepared in the presence of scarlet 3R) showing this absorption contain a scarlet 3R fraction.

Scanning Electron Microscopy
Figure 10a-c shows scanning electron microscopy (SEM) images of PANInorm prepared by the general method. PANInorm exhibits a short-fiber structure due to molecular aggregation. Figure 11a-c shows SEM images of PANI-3R(ES). This polymer shows no fiber-like structure under the SEM; globular and partly broken egg-like structures are observed (Figure 11b). The globular structures are formed during the polymerization process by the interaction between the monomer and scarlet 3R.

XRD
XRD analysis was performed for PANInorm and PANI-3R(ES), as shown in Figure 12. The diffraction patterns of PANInorm and PANI-3R(ES) agree with previous results [12]. PANInorm shows four peaks at 2θ = 9.4, 15.6, 20.6, and 25.7°, while PANI-3R(ES) shows three peaks at 2θ = 15.9, 20.7, and 25.7° due to a decrease in crystallinity. The signal at 9.4° may be due to the inter-main-chain distance. It was reported that PANI prepared in the presence of sodium dodecylbenzene sulfonic acid (SDBSA) as a surfactant shows no signal at 9.4° due to a decrease in crystallinity [13].
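The reported 2θ peak positions translate into lattice d-spacings via Bragg's law (nλ = 2d sin θ, n = 1) with the CuKα wavelength quoted in the experimental section. A minimal sketch, using the PANInorm peak list from the text:

```python
import math

CU_KALPHA_ANGSTROM = 1.5428  # CuKα wavelength quoted in the experimental section

def d_spacing(two_theta_deg, wavelength=CU_KALPHA_ANGSTROM):
    """Bragg's law with n = 1: d = lambda / (2 sin(theta))."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

# 2θ peaks reported for PANInorm; 2θ = 9.4° gives d ≈ 9.41 Å, consistent
# with an inter-main-chain distance of roughly 1 nm.
for two_theta in (9.4, 15.6, 20.6, 25.7):
    print(f"2θ = {two_theta:>5.1f}°  →  d = {d_spacing(two_theta):.2f} Å")
```

The absence of the 9.4° peak in PANI-3R(ES) then corresponds to the loss of the ~9 Å inter-chain ordering when the dye acts as a surfactant.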
Similarly, in this study, a decrease in crystallinity compared with the PANI prepared without surfactant was observed, which was attributed to the presence of scarlet 3R acting as a surfactant during the synthesis.

Thermogravimetric Analysis
The results of the TGA of PANI-3R(ES) and PANInorm are shown in Figure 13. In Region 1, there was out-gassing of the samples, in which moisture was evaporated below 100 °C. Degradation begins at 250 °C (onset), resulting in the weight loss of PANInorm, whereas the thermal degradation of PANI-3R(ES) begins at 310 °C (onset), indicating that PANI-3R(ES) has a higher thermal stability than PANInorm. The thermal decomposition of PANI-3R(ES) is gradual, and the magnitude of the curve increases with temperature. Loss of the dopants (hydrogen sulfate or the dye) and degradation of the polymer occur in Region 2 (Figure 13). PANInorm shows a drastic weight loss at 325 °C, which may be due to loss of dopant (hydrogen sulfate). In comparison, PANI-3R(ES) exhibits no such drastic loss on heating, possibly because the PANI being wrapped by the dye increases the thermal stability. Carbonization occurs in Region 3 (Figure 13). The total weight loss of PANInorm is 7.5% when heated to 600 °C, compared with 3.42% for PANI-3R(ES). This result confirms that the polymerization of aniline in the presence of the dye creates a PANI-dye material with improved thermal stability.
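The Region 1/2/3 bookkeeping on the TGA curves amounts to simple arithmetic on the trace. The following sketch uses a synthetic trace loosely mimicking the PANInorm numbers in the text (not the authors' data) and a crude threshold-based onset estimate rather than the tangent-extrapolation onset used by TGA software:

```python
def tga_summary(temps_c, mass_pct, onset_loss_pct=1.0, moisture_limit_c=100.0):
    """Total weight loss (%) and a crude degradation onset (°C): the first
    temperature past the moisture region where cumulative loss exceeds the
    threshold. Hypothetical helper, not from the paper."""
    total_loss = mass_pct[0] - mass_pct[-1]
    onset = None
    for t, m in zip(temps_c, mass_pct):
        if t > moisture_limit_c and (mass_pct[0] - m) >= onset_loss_pct:
            onset = t
            break
    return total_loss, onset

# Synthetic trace shaped like the described PANInorm curve: nearly flat to
# ~250 °C, sharp loss around 325 °C, 7.5% total loss at 600 °C.
temps = [25, 100, 200, 250, 300, 325, 400, 500, 600]
mass = [100.0, 99.9, 99.9, 99.8, 98.5, 95.0, 93.5, 92.8, 92.5]
loss, onset = tga_summary(temps, mass)
```

With a finer real trace, the same comparison reproduces the paper's contrast: a sharp dopant-loss step for PANInorm versus the gradual decomposition of PANI-3R(ES).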
Proposed Structure
Pandiselvi et al. synthesized a chitosan-PANI/ZnO hybrid for the removal of orange 16 dye and indicated that PANI interacts with dyes [14]. N+ sites in PANI-3R(ES) (doped state, emeraldine salt form) interacted with the negatively charged scarlet 3R ions during the polymerization process (Figure 14a). Since it is an acidic dye, scarlet 3R functions as an oxidizer; the anionic portion of scarlet 3R bonded with PANI molecules via ionic interactions and functioned as a surfactant during the formation of aniline monomer nanoparticles before the addition of APS for the polymerization process [15]. The dye therefore created a layer of nanospheres around the PANI, similar to those seen when SDBSA acts as an anionic soap during PANI synthesis (Figure 14b).
Figure 14 also shows the interaction of PANI and scarlet 3R via ionic and hydrogen bonds. In addition, PANI partly interacts with sulfuric acid.

Conclusions
Colorization of conductive polymers is a challenging task because of the intense inherent color caused by extensive π-conjugation along the main chain of these polymers. Despite this issue, the colorization of conductive plastics has important real-world applications. In this study, PANI was prepared in the presence of the acidic dye scarlet 3R, which functions as a surfactant. By countering the ionic nature of PANI, scarlet 3R was able to form a PANI-dye composite via doping-dedoping (oxidation-reduction) processes and effective solvent selection. Herein, the first attempt at tuning the color of a conductive polymer with dyestuff was reported.

Author Contributions: T.Y. conducted the UV-Vis, ESR, TGA, IR, and Raman spectroscopy measurements, and the SEM observations. H.G. synthesized the polymers. All authors have read and agree to the published version of the manuscript.

Funding: This research received no external funding.
Re-Doubling the Crises of the Welfare State: The impact of Brexit on UK welfare politics

Abstract
The double crisis approach distinguishes two kinds of challenge confronting modern welfare states: long-term structural problems and short-term difficulties resulting from policy choices which affect the success with which the long-term issues can be addressed. Structural challenges include two main areas:
• globalisation and technological change, demanding that governments direct attention to national competitiveness, and
• population ageing, requiring more spending on pensions, health, and social care.
Recent policy-related problems include the austerity programme pursued since 2010, which has been particularly directed towards benefits and services for working-age people. Responses to both kinds of challenge have set the stage for Brexit. In The Double Crisis of the Welfare State (2012) and other work (2016), I argue that in the case of the UK, policies since 2010 have done little to address the long-term problem of competitiveness. Instead they have compounded it by curtailing social policy resources, directing what was available to pensions and to some extent health care (with real cuts in social care), financed mainly by harsh cuts affecting working-age people. These have won votes but limited the capacity for serious social investment and harmed those whose interests have already been damaged by social change. These trends have deepened social divisions and are an important factor in the Brexit referendum outcome. This article examines recent developments in relation to the double crisis, considers the social divisions that current short-term policies generate in the context of structural challenges, and looks at their contribution to political mistrust and to the Brexit vote. It also discusses the likely impact of Brexit on state welfare in the UK and on those who feel left behind by globalisation.
It goes on to discuss the impact of divisive policies on the continuing trajectory of the double crisis. By contrast, the second aspect of long-term pressures, population ageing, concerns consumption rather than production. Its major effect is to increase the demand for pensions and for health and social care, requiring higher levels of funding and reforms to ensure that provision is cost-efficient. Pensions are generally popular and older people are a powerful political force. The coincidence of globalisation/labour-market transformation and population ageing poses difficult choices for policy makers: spend more on younger people to enhance competitiveness, or on the growing numbers of older people outside the labour force. Domestic policy has also failed to meet the needs of those disadvantaged by globalisation and industrial change, and this reinforced its contribution to support for Brexit. Since 2010 the Conservative-led coalition and the 2015 Conservative government have retrenched most areas of state spending, but have cut back on health care and pensions spending much less sharply than on local government and on the benefits and services most used by people of working age (Hills, 2015; Belfield et al., 2016). They have tackled one long-term problem, population ageing, at the expense of exacerbating the other, maintaining competitiveness in a globalised world, and this has further damaged the losers from globalisation, enhancing support for Brexit.

Globalisation, technological change and the labour market
More intense international competition and technological change require, among other things, that governments ensure their industries compete effectively. This depends on a number of factors, including production costs, labour costs, skill levels, and the availability of workers. Welfare state policies (and the incidence of the taxes that fund them) affect all these areas. Wages have traditionally been relatively low in the UK.
They rose during the early 2000s, compared to other G7 countries, but then fell substantially after 2007 (OECD, 2016a). Unit labour costs, understood as the average cost of labour per unit of output and taking into account such factors as social insurance costs, are also low. However, low taxes and cheap labour cannot ensure a strong international competitive position; it is what is done with the labour that counts as well as what it costs. Multi-factor productivity (taking into account the costs of labour, energy, goods purchased, capital employed and management costs to produce each item of output) has lagged behind other G7 countries, apart from a brief period between 2001 and 2007 (Figure 1). One outcome of the UK's poor performance is a chronic balance of payments deficit. In this situation, one way forward is for government to attempt to stimulate private investment and/or to invest itself. The current government has pursued policies to facilitate investment, cutting taxes on business sharply and income tax less sharply and increasing indirect taxation. Income tax fell from 14 to 12 per cent of government revenues between 2010 and 2016, while the more regressive VAT rose from 14 to 16 per cent (OECD, 2016b). National Insurance contribution revenues remained at about six per cent, while business taxes fell from 2.5 to two per cent and total revenues increased slightly. The net effect has been to shift the burden of paying for deficit reduction more towards lower-income groups (Johnson, 2015: 5). Farnsworth (2015) estimates the 'corporate welfare state' (the subsidies, capital grants, insurance and advocacy as well as transport, energy and procurement subsidies directed at the private sector) at £183bn a year or roughly ten per cent of GDP, rather more than is gathered in tax on business.
The low-tax strategy has encouraged business to locate in the UK and, all things being equal, to expand, but has had limited success in improving productivity. Businesses have failed to increase investment for each worker, preferring in general to employ more workers at relatively low wages. The UK has lagged behind all other G7 members except Japan in improvements to capital 'deepening' (the rate of change in capital stock per labour hour) for most of the last three decades. It improved its position to midway between 2006 and 2009 but fell back sharply after 2010 (OECD, 2016a). Proposals for a National Investment Bank which would direct money to high-tech projects have been advanced by the Labour opposition (Guardian, 2016) but are unlikely to be taken forward. The Autumn Statement promises £4.7 billion over five years to support productivity through R&D, plus small amounts of infrastructure spending (Treasury, 2016). This will raise R&D expenditure in the UK from 1.7 to 1.75 per cent of GDP, much lower than competitors such as Germany (2.9 per cent), France (2.2) or the USA (2.7) (Eurostat, 2016b). The weakness of investment throws the emphasis back on labour market policy: the quantity and quality of workers. The government has mobilised more groups into work, mainly by making alternatives for unemployed and sick and disabled people less tolerable, increasing minimum wages as incentives and improving labour flexibility by reducing employment rights. Benefit cuts are discussed in more detail below in relation to poverty and inequality. It is unlikely that the April 2016 increase in the National Minimum Wage, renamed the National Living Wage, from 48 to 49 per cent of median wages for full-time workers (OECD, 2016b) will have much effect. The amounts are insufficient to address the scale of the problem and much goes to household members who are not in poverty because their partners earn more (Browne and Hood, 2016: 2).
The period of employment required before protection against dismissal applies was lengthened from one to two years in 2011, fees for industrial tribunals were introduced and rapidly increased from 2013 onwards (cutting recourse to the system by 70 per cent in one year) and rights to preservation of conditions of employment when public sector workers are transferred to the private sector were diluted in 2014. The 2016 Trade Union Act curtails rights to strike and picket, especially for core public sector workers. Such measures are likely to make workers more compliant and, all things being equal, may improve productivity. The employment rate of 73 per cent is high, exceeded only by Nordic countries, Germany, New Zealand and the Netherlands among OECD nations. Women's engagement in paid work has increased steadily through the last two decades while men's has declined. However, the proportion working more than 30 hours a week is relatively low at 76 per cent, above only Australia and Switzerland. The availability and cost of child care is likely to be an important factor in this. A high level of work participation without an increase in productivity is an inadequate response to globalisation unless wages are kept relatively low. If it proves difficult to stimulate private investment effectively and labour is already relatively cheap, flexible and docile, a remaining strategy to enhance productivity is to improve labour quality. There have been a large number of educational reforms in the past two decades, mainly centring on school management in the move towards academies and free schools, and on testing. These are long-term measures. Spending on education as a whole is relatively high, exceeding that in the US and in EU members, apart from Nordic countries (OECD, 2014, Annex Figure 2), and participation in tertiary level education is also high: the highest in Europe, with 38 per cent of the population having tertiary qualifications.
However, vocational education receives many fewer resources, with spending per student about 70 per cent of that on students in academic programmes (ibid: 3). Spending on adult vocational programmes was cut by four per cent between 2008 and 2014, the largest cut in the EU (Colebrook et al., 2015: figure 4.4). Many countries increased spending in this area in response to the recession. The UK also has a larger proportion of 16-29 year-olds neither employed nor in education and training (16.3 per cent in 2012, against 15.2 per cent in the US, 9.9 in Germany and 15.0 in the OECD as a whole: OECD, 2014: 7). These statistics have led many to question whether education policy is unbalanced (see Colebrook et al., 2015: ch 4). The UK also increased student fees to £9,000 for most universities following the Browne review in 2010, so that the choice between higher education and work for more academic students is more pointed. University recruitment continues to rise, indicating a continuing imbalance between academic and vocational training (UCAS, 2016). The UK maintains a low-tax, relatively low-labour-cost economy with high employment and weak unions, but lags in productivity. Two factors that may explain the weakness in this area are low investment in capital per worker and low investment in vocational training. This goes hand in hand with an environment in which middle class, more highly educated winners from globalisation prosper while lower class, less-skilled, less-secure and lower-paid workers are left behind. The welfare state regime for the latter has been cut back sharply. One outcome is a deepening division between the two groups. As evidence presented later shows, it is the losers who are most likely to support Brexit. The second long-term challenge is demographic. Government spending in this area is unlikely to address competitiveness.
Population ageing

The central estimate of population growth by the Office for Budget Responsibility (OBR) projects an increase in the proportion of over-65s from 18 to 26 per cent between 2015 and 2065 (OBR, 2016b: chart 2.1). Spending on both pensions and health care is projected to fall as a proportion of GDP during the life of the current parliament, due to rises in GDP, increases in the state pension age and plans to generate substantial cost savings in the NHS. In the longer term, the government's commitment to the 'triple lock', uprating state pensions in line with the highest of price inflation, earnings growth or 2.5 per cent, will 'ratchet up average pension payments relative to the economy's capacity to fund them' (OBR, 2015). This led the Work and Pensions select committee to recommend the dilution of the triple lock to indexation by earnings (HoC, 2016), but the 2016 government has committed to retaining it (Hammond, 2016). Achieving the savings plan for the NHS is widely regarded as unlikely (King's Fund, 2016a). The service as a whole is currently in deficit (Appleby, 2016). Future NHS directions are currently being decided through five-year Sustainability and Transformation Plans. Most experts agree that health service futures must depend on successful co-ordination of health and community care. While outcomes are as yet unclear, the King's Fund has expressed doubts that the plans will deliver their objectives because they are overly focused on achieving savings and on rationalising the acute hospital sector, and because there is no mechanism to ensure that adequate health and community inter-linking takes place (King's Fund, 2016b). OBR assumes that pension and health care costs will start to rise again in the early 2020s, as a result of population ageing and technological changes and rising demand for health services.
The increased costs in health care are estimated at some two per cent of GDP by 2065 and in pensions at more than 2.5 per cent (OBR, 2016c: chart 3.1; 2016b: chart 4.1). Such a rate of increase is rather lower than that achieved during the past half-century and may be manageable, assuming that society continues to grow richer and that the austerity programme is slackened to allow health and pension budgets to grow, in conflict with current government policies. However, OBR points out that further factors, most importantly failure to achieve future productivity gains on the current scale (Corlett et al., 2016: Figure 3) and the likelihood that a richer society will demand higher quality provision, may impose an extra six to eight per cent increase in the longer term (2016c). This will impose severe stress. The 2016 annual report of the Care Quality Commission (CQC) refers to pressures as 'unprecedented' and states that 'the sustainability of adult social care is approaching a tipping point' (CQC, 2016: 7). In relation to hospital services, it concludes: 'we are concerned about the sustainability of quality' (CQC, 2016: 9). Despite cash injections, social care services have not enjoyed the same level of protection as the NHS and spending in this area has fallen, so that many fewer people receive support. The cuts to service spending fell most harshly on local government, which lost about a third of its resources between 2010 and 2015. Local government cuts reduced non-mandatory services enormously and had major impacts even on the statutory responsibilities of social care and children's services. The numbers of over-65s receiving local authority social care services fell by a third between 2009 and 2014 (Burchardt et al., 2015). This creates extra difficulties for the NHS due to bed-blocking, as hospitals are unable to discharge frail older people.
Support through the Better Care fund and through policies that enable local authorities to raise an extra two per cent rate precept for care (which amounts to some three per cent of total local social care spending) are insufficient to bridge the funding gap (King's Fund, 2016c). In short, the pressures of an ageing population have been contained for a relatively short period at the cost of considerable strain. The kinds of overall structural changes that would guarantee the sustainability of provision in this area have not been pursued. Plans to substitute less expensive delivery of health care in the community for hospital care (currently more than four-fifths of NHS spending) and to introduce an 'escalator' into pension commitments so that payments were related to demographic and price shifts have not been implemented, and the capacity of the non-state sector to share the burden is limited. Instead government has addressed immediate problems through one-off cash injections into the NHS and pension age rises. Funding for community care has not been put on a secure basis nor effectively integrated with hospital care and a package to support private care spending and put it on a nationally uniform basis has not been established, despite a series of proposals (Nuffield Foundation, 2016;DH, 2012;Barker, 2014). The overall impact of population ageing policy is to direct a yet higher proportion of constrained spending towards older people, making it more difficult to develop the human and social investment regime necessary to advance the quality of labour. The UK has managed the immediate pressures from an ageing population with great difficulty and it is unclear how long it will be able to do so. Programmes that favour the old rather than those of working age exacerbate age divisions and do little for productivity. 
The most striking issue for the UK welfare state is the need to improve productivity in order to compete effectively in a world market, thus increasing resources available to sustain social provision for all age groups. The failure to address the issues of productivity and of improving job quality has contributed to the sense of rejection among those who feel their opportunities are deteriorating.

Short-term pressures: policy choices, poverty and inequality

The immediate crisis of the UK welfare state concerns its core objective: achieving adequate living standards across the population. The UK is relatively unequal and has high levels of poverty for a developed economy: the highest poverty levels in Western Europe, at 23.5 per cent of the population in 2015 by the 60 per cent of median equivalised income measure. In Germany the rate is 20.0 per cent, in France 17.7, in Sweden 16.0. The only European countries with higher rates are Mediterranean or post-socialist (Eurostat, 2016b). Income inequality, as measured by the ratio of the top fifth to the bottom fifth in 2015, is again the highest in Western Europe: 5.2 in the UK, against 4.8 in Germany, 4.3 in France and 3.8 in Sweden (Eurostat, 2016a). Since 2010, governments have cut back benefits for the working-age population by freezing rates, withdrawing benefits for some groups, introducing restrictions on child and housing benefits and reforming disability benefits with the intention of saving a third of projected expenditure (see Hills, 2015). OBR describes these cuts as 'unprecedented' (OBR, 2016a: 2). Analysis by the Institute for Fiscal Studies of the most recent reforms, summarised in the May 2016 budget, indicates that they will impact differentially on the poorest three deciles, cutting their incomes by more than six per cent over the life of the parliament and exacerbating inequality (Elming and Hood, 2016).
The most important factor is the freezing of benefit rates, despite the likelihood of an increase in inflation to more than two per cent (Bank of England, 2016). The measures to address long-term issues of productivity and competitiveness reviewed above (tax policies that have not succeeded in stimulating investment and labour market policies that keep wages down, weaken unions, and mobilise people into paid work through a combination of benefit cuts and the incentive of a slightly higher minimum wage) have failed to slow the rise in poverty. Patterns of poverty have changed substantially during the past two decades, due to higher participation in paid work (but with greater wage inequalities) and at the same time higher state pensions and greater access to private pensions. Projections into the future, taking account of estimates of growth, inflation and changes in employment, indicate that the trends to working poverty and to greater inequality are likely to continue. Although wage rises will average over one per cent a year, the benefit cuts for lower income workers outlined above will roughly cancel this out. Relative pensioner poverty will remain unchanged, but relative child poverty will rise from 17.8 per cent in 2015-16 to 25.7 per cent by 2020-21, wiping out almost all improvement since 1997-8 (Browne and Hood, 2016: 2). UK labour costs are likely to remain low, so that the argument for investing to improve the quality and availability of labour becomes even stronger. In short, the UK welfare state faces serious long-term challenges both from globalisation and labour market change and from population ageing. Current policy directions are failing to address those problems, but deepen social divisions and bear most heavily on the most vulnerable groups, leading to higher poverty, particularly among low-paid people who fail to benefit from recent changes. 
They contribute to disillusion with the political elite among this group, and a perception that the more open markets championed by the EU damage their interests.

Social and political divisions

Social divisions and welfare state policies

The UK polity is a first-past-the-post majoritarian system with two major parties, although new political formations around national identity and green politics have emerged in recent years. In such a system social divisions between winners and losers can be self-reinforcing. If those who believe they are advantaged by a particular policy are sufficiently numerous and well-mobilised to have an impact on voting that exceeds that of the losers, they can command policies that sustain the division to their own benefit, as they see it. There are indications that this 'winner-takes-all' (Hacker and Pierson, 2010) logic applies across three areas in relation to the UK welfare state: tensions between pensioners and those of working age, particularly those on low wages or unemployed; tensions between better and worse-off; and tensions between immigrants and established residents, often bound up with issues of ethnicity and national identity and of particular relevance to Brexit. Other divisions (between the interests of men and women, between those of different sexuality, and between regions) have less effect on social policy and are not discussed here. The European Quality of Life survey identifies the UK as having the highest levels of tension in Western Europe between old and young people (74.5 per cent report 'a lot of' or 'some' tension), and coming after France in tension between rich and poor (86.7 per cent) and after only France, the Netherlands and Belgium in tension between racial or ethnic and religious groups (88.9 and 83.7 per cent respectively). These divisions have been reinforced by the way the rapid rise in immigration and asylum seeking in recent years has been managed (see Taylor-Gooby et al., 2017).
Social tensions and conflicts damage quality of life. They also provide opportunities for political parties to muster support from the winners in social divisions. This is particularly important in the majoritarian UK, compared with the more consensus-oriented systems common across Europe, which facilitate negotiation and coalition between different groupings (Bonoli and Natali, 2012: ch 1). In the UK, older people are more likely to vote Conservative than younger people. This is reflected in the overt generosity of the current government to the services they most use (the 'triple lock' and the repeated promise to ring-fence NHS budgets) in contrast to the major cuts elsewhere. Spending on pension benefits was close to that on non-pension benefits in 2010 at 7.8 and 7.4 per cent of GDP respectively. By 2016, the percentages were expected to have diverged to 8.2 and 5.9 per cent, reflecting cuts in benefits for younger age groups and support for older groups (Eurostat, 2016a). In practice, increases in retirement age more or less cancel out the cost of the triple lock and cuts to social care and housing budgets mean that total spending on older people is projected to decline as a proportion of GDP, but more slowly than spending on younger people (Taylor-Gooby, 2016: figure 10). However, as pointed out above, this is a short-term fix: spending will rise in relation to GDP in the longer term and, all things being equal, deepen the deficit. The age strategy is successful in gaining political support. In the 2005 election, 29 per cent of those aged 55 or over declared an intention to vote Conservative as against 24 per cent for Labour. By 2015, the percentages were 34 and 23 per cent (Ipsos-Mori, 2015 and 2010). The effect is redoubled because older people are roughly twice as likely to actually cast their votes as younger people. 
Further divisions in policy for working-age people between those in employment and those out of it are reflected in the move from the National Minimum Wage to a slightly higher National Living Wage and the increases in the income tax threshold, set against the transition to Universal Credit, the tightening of the 'benefits cap' and the associated benefit cuts. The new policies ensure that those on benefits receive substantially less than low-paid workers in work: for a family of two adults (both unemployed) and two children, benefit income was equivalent to 61 per cent of median earnings (assuming both worked full-time) in 2010, but had fallen to 57 per cent by 2014 (OECD, 2016b). This gap will widen as the freeze in benefit rates from April 2016 (Turn2us, 2016) affects unemployed people at a time when wages are expected to rise. These policy differences appear to be reflected in voting. While good data on voting by benefit claimants is not conveniently available, 45 per cent of middle class AB people declared an intention to vote Conservative in 2015 as against 26 per cent for Labour, a wider party gap than the 37 and 28 per cent in 2005. The recent rise in immigration from EU and non-EU countries has been particularly vulnerable to politicisation. Immigration fluctuated between two and three hundred thousand a year between the early 1970s and late 1990s and then rose rapidly to over 600,000 a year, chiefly as the result of the accession of new EU members and the impact of Middle Eastern wars (ONS, 2017). This has led to real concerns among traditionally right-wing and also among traditionally Labour-supporting working class voters about competition for jobs, housing and school places (Dustmann et al., 2016). Among black and minority ethnic voters (categorised as one group, due to the relatively small numbers in the Ipsos-Mori survey sample) 23 per cent intended to vote Conservative, versus 65 per cent Labour, compared to 14 and 64 per cent in 2010.
Conservatives gained ground among older, middle class and black and minority ethnic voters. Net migration fell by about 50,000 between July and September 2016 after the Brexit vote, half the change due to a fall in immigration and half to a rise in emigration, shared equally between EU and non-EU citizens (ONS, 2017: Figure 2). These figures are highly provisional but may indicate that the referendum outcome is having an impact. Divisions in relation to age and working status can be seen in terms of the coincidence of interest and ideology. Since the establishment of welfare states, by far the lion's share of provision has been directed to the needs of older people. The interests of the mass population, who feared poverty when they were too old to work, were reinforced by those of employers, who found pensions helpful in the process of replacing older workers with more energetic, highly skilled and cheaper younger workers. The needs of old age have traditionally topped the list of deserving areas of social provision (Coughlin, 1980; Cook, 1979; Taylor-Gooby, 2015: 13). Old-age dependency has risen from 299 over-65s for every 1,000 people of working age in 1990 to 307 currently, and is expected to reach 395 by 2065 (ONS, 2016). Conversely those of working age have lost out, and this is indicated by two developments bound up with changes in the labour process and sectoral shifts away from manufacturing: the long-term fall in the share of growth going to workers, evident in advanced countries since the late 1960s (OECD, 2015b), and the declining influence of labour, encapsulated in the fall in union membership. Union membership peaked at about 50 per cent in the UK in the mid-1950s and has now fallen to below half that (OECD, 2015a). These divisions are reinforced by the conflict between immigrants and nationals, linked to conflicts between dominant and minority ethnic, racial and religious groups and to divisions of interest.
Kriesi and others argue that, in general, the better educated, more highly skilled and wealthy do well out of more open markets and are in a position to grasp the opportunities brought by globalisation (Kriesi et al., 2012;Teney et al., 2013). The less fortunate, skilled and supported do worse. Hence it is the latter group who provide a fertile recruiting ground for anti-immigrant politics (Ford and Goodwin, 2014) and for anti-EU campaigns (Van Elsas et al., 2016;Hakhverdian et al., 2013). Globalisation can be represented as an opening up of the national economy to competitive market forces and of national borders to immigration (Davies, 2016). UK governments have failed to develop policies which compensate the losers from these processes or improve their skills and productivity so that they can grasp opportunities from it. The divisive short-term policies which sustain government popularity, but do little to address long-term problems, bear most heavily on those who lose out from structural changes in the economy. The losers from globalisation are open to political movements which assert national identity and strong national control of borders as the most relevant response and fuel their antagonism to institutions identified with market openness and free movement, such as the EU. From this perspective, the divisions that surrounded the Brexit vote are based on the experience of globalisation as an oppressive force and the desire for strong national government to protect more vulnerable individuals who see themselves as losing ground to external forces. This perspective is reinforced by what we know about differences between Leave and Remain voters. There are two main kinds of evidence available: attitude surveys and data on voting patterns by ward, which can be related to demography, politics, degree of deindustrialisation and the impact of spending cuts. 
The attitude data for the period before the referendum paints a picture of voter opinion as sharply divided by both cultural and socio-demographic factors: national identity, attitudes to immigrants and attitudes to the EU and to the UK governing elite, as well as social class, age, area of residence, occupation and level of education and skill (Swales, 2016; Kaufman, 2016). All attitude surveys agree on three points. First, there are clear socio-demographic differences between Leave and Remain voters or intending voters. The group most decisively supporting Leave tends to be more working class, less well educated, lower skilled and to live in de-industrialising and northern areas. Earlier work (for example, analysis based on BSA 2015) also identifies an older, more middle class group distinguished by strong national identity that tends to support exit (Swales, 2016). Secondly, there are clear differences in values and beliefs: Leave voters are more concerned about the damage they believe immigrants do to the economy and more likely to see immigration from the EU as essentially a burden. They are also more likely to believe that leaving will benefit the British economy (Natcen, 2016). Thirdly, the Leave campaign gained ground in the run-up to the vote and was more successful in getting supporters, who held their views more strongly (Swales, 2016), to vote (Curtice, 2015). The main differences lie in the relative importance of cultural factors and identity politics: the pre-referendum surveys are more likely to point to the importance of the former. This suggests that some of the earlier data which is widely used (in particular from the 2015 British Social Attitudes (BSA) and British Election Surveys and commercial polls) may not tell the full story. There are also polls conducted very close to the 23 June referendum by Hobolt and Wratil (2016) and shortly after that date by BSA (Curtice, 2016).
Hobolt and Wratil's YouGov survey of 5000 voters conducted in May 2016 shows a clear division between the way 'Leave' and 'Remain' voters understood the issues: while the former expressed concerns about immigration and lack of trust in the UK government and the EU, the latter stressed economic benefits (Hobolt, 2016: Table 1): 'fears of immigration and multiculturalism are more pronounced among voters with lower levels of education and in a more vulnerable position in the labour market. Such voters also voted most decisively for Leave, whereas the 'winners' of globalization - the younger and highly educated professionals - were overwhelmingly in favour of Remain' (Hobolt, 2016: 1273). Further analysis of a sub-group of 1396 BSA 2016 participants, re-interviewed after the referendum, also shows very clear differences in the understanding of Leave and Remain voters on how Brexit will affect the UK (Curtice, 2016). Post-referendum studies of the distribution of the vote deal with socio-demographic factors and local circumstances. A thorough study by Becker and colleagues reports: . . . the share of the population aged 60 and above as well as the share of the population with little or no qualifications are strong predictors of the Vote Leave share. Furthermore, areas with a strong tradition of manufacturing employment were more likely to Vote Leave, and also those areas with relatively low pay and high unemployment. We also find strong evidence that the growth rate of migrants from the 12 EU accession countries that joined the EU in 2004 and 2007 is tightly linked to the Vote Leave share. . . . In addition, we find that the quality of public service provision is also systematically related to the Vote Leave share. In particular, fiscal cuts in the context of the recent UK austerity programme are strongly associated with a higher Vote Leave share.
We also produce evidence that lower-quality service provision in the National Health Service is associated with the success of Vote Leave. (Becker et al., 2016: 38-9) This leads to the conclusion that: In terms of policy conclusions, we argue that the voting outcome of the referendum was driven by long-standing fundamental determinants, most importantly those that make it harder to deal with the challenges of economic and social change. They include a population that is older, less educated and confronted with below-average public services. (Becker et al., 2016: 39) The evidence is complex, limited and of varying quality. Overall it supports the approach of this article: that Brexit is best understood as a response to long-term structural factors as they are understood by the population, rather than cultural issues, exacerbated by recent policies. The groups most affected by globalisation, labour market change and deindustrialisation, who were least well served by policies which favour the better off and pensioners, were much more likely to vote Leave. We move on to consider how exit from Europe will affect social divisions in the future.

The impact of Brexit

Brexit negotiations appear likely to go ahead despite the fact that the June 2017 election result dramatically weakens the UK's position. The UK government wishes to have control over EU immigration, a continuing close relationship with the EU market and also much more open market relationships globally. Whether these objectives are compatible is unclear, and the precise impact on British voters, especially those who supported Brexit, must depend on the outcome of negotiations and the extent to which new government policies redirect resources to improving their skills and opportunities (DEEU, 2017). Here we comment on possible economic and political effects.
The Brexit-sceptics point out that the ruling Conservative party gains very substantial support from the finance sector (possibly half of its funding: Financial Times, 2015), and that a thorough-going withdrawal is highly likely to damage the capacity of this sector to trade across Europe (MacShane, 2016; Brooks, 2016). A possible outcome is that the terms of exit are so diluted as to generate continuing conflicts over the influence of the EU on the UK and over control of borders. In any case, the government has signalled a liberal commitment to open markets and economic globalisation, so that the pressures on winners and losers from labour market change will continue and the competitiveness imperative will, if anything, be strengthened. Assuming Brexit is pursued, the only economic fact we have is the fall of the pound in international markets, by 15 per cent against the Euro over the last twelve months (London Stock Exchange, 2017). Short-term predictions by ONS suggest a slow-down in growth (but not a recession) and a rise in inflation to 2.5 per cent by 2018-19 (Corlett et al., 2016: Table 1). These changes will increase export opportunities, reduce earnings and domestic consumption, and intensify the impact of the benefit freeze on the working-age welfare state. Without a sharp reversal of current policies, they will deepen the divisions between old and young and those in and out of work noted above. In the longer term, outcomes are likely to lie between an optimistic and a pessimistic scenario.

Optimistic

An optimistic scenario would require the UK to develop short-term policies that help address the long-term crises of competitiveness, investment, poverty and inequality. Recent policies fail to address any of these issues but have bought time in relation to population ageing at the cost of higher retirement ages and considerable damage to the interests of other age groups.
A positive outcome would require investment in education and training capacity and in research and development to support competitiveness. In any post-Brexit world the UK seems unlikely to have the ease of access that it currently enjoys to the markets of the most developed and convenient regional economies, although it may gain improved access to a broader, highly competitive world market where wages are lower in many participant countries. This suggests that the gap between winners and losers will widen. One possibility is a sharp reduction in living standards in the UK, especially for the least educated and those working in industries with low levels of investment. Government could take the opportunity to compensate losers and to improve the quality and utilisation of workers' skills. In addition to greater support for productive services, provision for the needs of older people through transfers and health and social care services will need to expand. This requires extra resources in staff and finance. It is difficult to see how more workers in the health and care industries can be provided without ensuring that the country remains attractive to immigrants with the relevant skills. A recent EU Health Observatory report shows that some ten per cent of doctors in the UK were from EU countries and 28 per cent from elsewhere abroad (Buchanan et al., 2014: 277). About six per cent of the adult care workforce is from EU countries and about 12 per cent from elsewhere (Skills for Care, 2016). The decline in the value of the pound will stimulate exports and act as a brake on imports. The International Monetary Fund predicts that the balance of payments deficit will fall from about six to about four per cent of GDP by 2020, largely as a result of the devaluation (IMF, 2016). Devaluation, however, will not in itself address the problems caused by low productivity.
Improvement in labour capacity demands investment in education, especially in training and in lifelong learning and skills renewal. In the longer term it may be that improvements in workforce quality will feed through into higher productivity, higher employment and better wages, summed up in the aspirations of the EU 2000 Lisbon Conference for 'the most dynamic and competitive skills based economy in the world' at EU level (EU, 2020). Investment in employment opportunities will also be required, and here the role of a state-funded investment bank could be important. Some extra finance for these policies and for health and social care could be provided by reversing some tax cuts, particularly at the top end, but any attempt to provide major new funding will require a move towards borrowing to expand social investment. The new apprenticeship programme from April 2017 (DfE, 2017) may go some way towards providing this by tapping substantial extra finance (up to £2.5bn) from a levy on employers, but sceptics point out that the quality of most apprenticeships has been diluted (Crawford-Lee, 2016; Saraswat, 2016). Productive work requires good working conditions, including parental rights and bargaining power, to ensure any increase in productivity feeds through into better pay. Some will always lose out, and this requires benefits adequate to meet people's needs plus support and opportunities to enter work. The optimistic scenario sees the government investing heavily in its workforce and in its social provision, in national productive capacity and technology, to ensure that the country prospers when it confronts the world market directly. It is possible that greater equality between old and young, better opportunities for most people, and a rise in real wages at the bottom could help heal some of the divisions in our society. It is clear that this programme comes as a package.
Growth and productivity-oriented policies require better training and investment and in turn generate the resources that can be used to help more vulnerable groups.

Pessimistic

The converse direction is that the UK continues to drift towards a future of sharper social divisions. If government continues to focus primarily on short-term objectives and does little to address the long-term crises, these crises will deepen and the cost of tackling them will become more formidable. The pessimistic scenario envisages a failure to address issues concerning the quality of the workforce, the opportunities available to people and investment in business, so that productivity remains relatively low and the UK can only compete by cheapening labour. Living standards fall, especially for the losers; it becomes more difficult to raise the taxes necessary to finance good services for older people and others in need, and the welfare state withers. It is likely that competition for available resources will grow more intense as the total quantity falls, and opportunities for politicians to win elections by capitalising on discontents and divisions expand. The negative scenario is one of contraction and further conflict.

Drivers of change

We have argued that the division between winners and losers from the economic and social transformations of our time lies behind the Brexit vote, and that Brexit is likely to depress living standards for those in the weakest competitive position in the world market and deepen that division. A withdrawal from EU trade is likely to damage growth prospects. Petrongolo (2016) points out that the 'average hourly wage in the 15 UK industries with the highest concentration of immigrants from the 2004 accession countries is £9.32, significantly below average UK-born wages of £11.07'. This suggests that, if a greater proportion of UK workers move into those jobs, average wages for UK workers will fall or the jobs will remain unfilled.
Reduced immigration will impact on living standards. This may lead to further disillusion with the political elite and policy-makers and deepening political instability.

Conclusion

This article has argued that the political context of policy making in the UK militates against effective strategy-building to tackle the long-term structural issues of globalisation and population ageing, and that this lies behind enthusiasm for Brexit among some groups. Recent government responses have failed to address these issues, have diverted attention from them and, in some ways, exacerbated them. These policies have reinforced divisions between old and young, winners and losers from globalisation, and nationals and immigrants. One outcome has been the visceral rejection of globalisation and the embracing of a chauvinist and protective nationalism that contributed powerfully to the vote against EU membership, understood as a vote against globalisation and open borders and against remote and mistrusted government by those who believe that the political class no longer has their interests at heart. It is unclear how current policies will do anything to address these issues or redress the profoundly unequal impact of globalisation and technological change on life-chances, or whether the June 2017 election outcome will lead to a substantial difference in direction. One possibility is that the experience of exit from the EU generates support for national policies that compensate losers from globalisation and equip the workforce, particularly at the bottom end, to compete successfully in an international marketplace. Indeed the shock of exit might provoke a national debate that makes it politically feasible to direct policy towards these goals over the life of several parliaments. Another is that it fails to do so and continues the current trajectory of poverty and inequality and a weaker national capacity to address longer-term problems, the double crisis writ large.
It is entirely unclear where the tipping-point between the two responses lies.
Disgust Enhances the Recollection of Negative Emotional Images

Memory is typically better for emotional relative to neutral images, an effect generally considered to be mediated by arousal. However, this account cannot explain the full pattern of findings in the literature. Two experiments are reported that investigate the differential effects of categorical affective states upon emotional memory and the contributions of stimulus dimensions other than pleasantness and arousal to any memory advantage. In Experiment 1, disgusting images were better remembered than equally unpleasant frightening ones, despite the disgusting images being less arousing. In Experiment 2, regression analyses identified affective impact – a factor shown previously to influence the allocation of visual attention and amygdala response to negative emotional images – as the strongest predictor of remembering. These findings raise significant issues that the arousal account of emotional memory cannot readily address. The term impact refers to an undifferentiated emotional response to a stimulus, without requiring detailed consideration of specific dimensions of image content. We argue that ratings of impact relate to how the self is affected. The present data call for further consideration of the theoretical specifications of the mechanisms that lead to enhanced memory for emotional stimuli and their neural substrates.

Introduction

Memory for emotional events is typically better than memory for comparable non-emotional events. Memories of significant, deeply affecting public events tend to be subjectively vivid and long-lasting, as evidenced by the flashbulb memory phenomenon [1]. Memory is also enhanced for emotional relative to neutral material presented under simplified laboratory conditions, and this effect persists over long intervals [2,3,4].
For example, increased retention has been observed for emotional relative to neutral narratives [5], and for emotional relative to neutral pictures and words [3,6]. The dominant theoretical account maintains that the effects of emotion on memory are mediated by arousal, where arousal relates to some physiological state of excitement or activation [7,8]. Arousal theory as applied to memory is closely coupled to wider dimensional models of the structure of emotion. Dimensional theorists claim that emotional experience can be captured by a small number of orthogonal dimensions [9,10]. The major affective dimensions are pleasantness (often referred to as valence) and arousal. Whereas the pleasantness dimension ranges from negative to positive, the arousal dimension ranges from calming/soothing to exciting/agitating. Both dimensions have been implicated in emotional memory, though arousal has received particular attention. Converging evidence from both animal and human studies supports a role for arousal in emotional memory. Animal studies have demonstrated that increased states of arousal improve memory [11,12,13], and that the amygdala plays an important role in retention. Most human studies have focused on facilitation of memory for arousing 'representational' stimuli, such as pictures of emotional scenes or words [2,3,6]. Human studies typically employ emotional and neutral stimuli from databases such as the International Affective Picture System (IAPS) [14], selected on the basis of normative subjective ratings of valence and arousal [14,15,16,17,18]. On the basis of arousal theory, it would be reasonable to predict that increasing levels of arousal would produce corresponding improvements in retention. This hypothesis has received support in some studies that have varied arousal parametrically while keeping other stimulus attributes constant [17].
However, evidence from multiple sources indicates that the relationship between rated arousal and memory is not always straightforward [19]. For example, Bradley et al. [2] divided emotional images into five categories that ranged from highly unarousing to highly arousing; only highly arousing images produced significantly enhanced memory. Counterintuitive findings have also emerged in younger adults [20], where memory was enhanced for low relative to high arousal negative items. The authors entertained the possibility of a Yerkes-Dodson type of explanation, in which memory is optimum at moderate levels of arousal but reduced at lower or higher arousal levels. However, in a review of the literature, Christianson [21] reported little evidence for the application of the Yerkes-Dodson law to emotional memory. The emphasis on arousal and valence dimensions within the emotional memory literature has inevitably drawn attention away from the potential influence of other factors, including the basic emotion categories postulated by Darwin [22] and others [23,24,25,26,27], or indeed, correlates of these categories. A comprehensive review of the literature on emotional memory concluded that findings have been severely limited by the consideration of emotion as 'merely arousal', and that a more complete understanding will result only from considering the contributions made by categorical affective states, as well as attributes linked to cognitive appraisal [7]. Though there is variation according to the particular taxonomy, basic emotions have included disgust, fear, anger, happiness, sadness, and surprise. Some of these have very similar profiles in the dimensional model, but nonetheless distinct qualities, and in some cases, distinct neural bases [28]. Fear and disgust are of particular significance for evaluating the effects of arousal as neurophysiological evidence indicates that the two are associated with partially distinct neural substrates [28,29]. 
Furthermore, psychophysiological evidence and behavioural ratings indicate that fear is a high arousal unpleasant emotion, whereas disgust is a moderate arousal unpleasant emotion [30], or certainly no more arousing than fear. If arousal is causal, then a straightforward prediction would be that memory for disgusting stimuli should be worse, or certainly no better than memory for frightening stimuli of equal unpleasantness. However, there are reasons to suppose that this might not be the case. Charash & McKay [31] failed to find a recall advantage for fearful words, reporting a recall advantage for disgusting over frightening words instead. The conclusions drawn from that study were not entirely persuasive as the disgust, threat, and neutral word lists were matched for word frequencies but not for affective variables such as pleasantness and arousal. Nevertheless, additional research has shown that another factor known to influence recollection -attention to the to-be-remembered material at encoding [32,33,34] -is associated to a greater extent with disgust than with fear-related stimuli. Relative to frightening stimuli, disgusting stimuli more readily engage interest and attention during the first 500 ms of viewing, demonstrated using eyetracking methodology [35]. Another key issue in the emotional memory literature surrounds the extent to which emotion has dissociable influences on distinct memory processes. In an elegant study, Ochsner [17] employed the Remember/Know paradigm to study recognition memory for pictorial emotional stimuli. The term remembering refers to a positive recognition response that is accompanied by recollection of the encoding context, such as thoughts, feelings, and sensory details that were experienced when the stimulus was first presented. 
The term knowing, on the other hand, refers to a positive recognition response that is associated with the knowledge that the stimulus is familiar and has been seen before, but without any recollection of the episodic context. Whereas previous research had shown that highly arousing or highly negative or positive materials are better retained than neutral ones [2], Ochsner [17] demonstrated that these effects are most apparent for measures of recollection rather than familiarity, and that both pleasantness and arousal have significant, independent effects on recollection. Given that investigations of emotional memory will benefit from consideration of categorical affective states, it seemed important to revisit this topic using well-controlled stimuli. The current research compared memory for images of equally negative disgusting and frightening scenes; this addressed the role of arousal directly, as there is reason to expect that lower-arousal disgusting images may be better recollected than higher-arousal frightening ones. In addition to matching the two image sets for pleasantness, it was also considered important to match them for distinctiveness, as distinctive items are often well-remembered [36,37,38], and for visual complexity, an attribute used to match stimulus categories in previous studies of emotional memory [17,18]. Following the lead of Ochsner [17] and Kensinger & Corkin [6], the Remember/Know procedure was employed to assess recognition memory and to derive estimates of recollection- and familiarity-based memory. On the basis of previous findings [6,17], we predicted that any effects of emotional content would be particularly evident in estimates of recollection for the episodic context.

Results and Discussion

Experiment 1: familiarity and recollection of disgusting versus frightening images

Experiment 1 compared recollection and familiarity for images of disgusting, frightening, and mildly positive scenes.
The latter were used as a control, in preference to neutral scenes, so that the three image categories contained some emotional meaning. The use of a positive image category as a baseline condition also enabled us to match baseline and negative image categories on a key content attribute, the presence of people, as most neutral images in the IAPS feature only objects or buildings. As noted above, the Remember/Know procedure was employed to assess recognition memory. The dependent measures were estimates of recollective memory, calculated from remember hits and false alarms, and familiarity-based memory, calculated from know hits and false alarms [39]. Our prediction was that recollection estimates for disgusting images would be significantly greater than for frightening ones. A repeated-measures ANOVA on participants' ratings of image redness in the encoding session showed no significant effect of image category, F(1.42, 42.72) = 2.33, p > .1 (Greenhouse-Geisser corrected). Hence, any memory differences cannot be attributed to differences between the stimuli on the feature judged at encoding. Mean recollection and familiarity estimates for the disgusting, frightening, and positive images are shown in Table 1. Proportions of hits and false alarms for the remember and know responses are also shown. All post hoc comparisons were Bonferroni corrected for multiple comparisons, with corrected p values reported throughout unless stated otherwise. The findings are consistent with recent results indicating that disgusting images preferentially attract attention when presented simultaneously with frightening images [35]. They also indicate that the previous advantage for disgust-related words [31] in a recall memory experiment extends and generalises to recollection and familiarity estimates for disgusting images that are matched carefully with frightening images for pleasantness, approach-avoidance, distinctiveness, visual complexity, anger and sadness.
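As a concrete illustration, recollection and familiarity estimates of the kind described above can be computed from Remember/Know hit and false-alarm rates. The sketch below uses the independence Remember/Know (IRK) correction, a standard approach in this literature; whether it matches the exact formula of reference [39] is an assumption on our part, and the rates are invented for illustration, not taken from the experiment.

```python
def rk_estimates(r_hit, r_fa, k_hit, k_fa):
    """Recollection and familiarity estimates from Remember/Know rates.

    Recollection: remember hit rate minus remember false-alarm rate.
    Familiarity: 'know' responses conditioned on the item not being
    recollected (the independence Remember/Know correction).
    """
    recollection = r_hit - r_fa
    familiarity = k_hit / (1 - r_hit) - k_fa / (1 - r_fa)
    return recollection, familiarity

# Hypothetical rates for one image category (not the paper's data)
rec, fam = rk_estimates(r_hit=0.50, r_fa=0.10, k_hit=0.20, k_fa=0.10)
print(round(rec, 3), round(fam, 3))
```

The conditioning step in the familiarity estimate reflects the assumption that a know response can only be given when an item is not recollected, so raw know rates understate familiarity for well-recollected categories.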
Perhaps most importantly, the current findings of increased memory for disgusting relative to frightening images raise questions about accounts of emotional memory effects that rely on arousal as the primary explanatory factor. The disgusting images were rated as significantly less arousing than the frightening images, and although this difference was small, the arousal account would certainly not predict improved recognition memory for disgust relative to fear images. Similarly, increased memory was observed for the disgusting relative to positive control images, even though these two stimulus categories were matched for rated arousal. Thus, under circumstances where tightly controlled materials were employed, the predictions of the theory linking arousal and memory were not supported. The memory advantage for disgusting relative to frightening images was found for both recollection and familiarity. By contrast, previous work has shown the influence of pleasantness and arousal on emotional memory to be primarily on recollection estimates [17]. In addition, whereas Ochsner [17] found that negative images were recollected more readily than positive images, Experiment 1 found that the negative image advantage was greater for disgusting than for frightening images, despite both image sets being matched for pleasantness, and negative valence in particular. Thus, the memory advantage for disgusting images cannot be attributed to variation in this dimension either and must instead derive from attributes other than valence and arousal. Our data so far suggest that not all arousing images have a strong enough effect upon participants to result in significantly increased recollection relative to carefully-controlled stimulus sets; therefore factors other than arousal and valence need to be identified and integrated into causal accounts of emotional memory. 
This idea is consistent with the results of a recent review of the emotional memory literature [7], in which the contributions of categorical affective states (e.g. basic emotions) and attributes based in cognitive appraisal theory were considered essential to an understanding of emotional memory. It has been similarly suggested that emotional stimuli engage semantic information and appraisal processes [17] as well as incorporate non-emotional attributes [40] that could affect memory in ways that remain to be investigated. Experiment 2 explored dimensions other than valence and arousal that might account for the enhanced retention of disgusting images.

Experiment 2: exploring dimensions that may account for enhanced memory of emotional images

It is possible that our disgusting stimuli were more memorable simply because they related to disgust. Another possibility is that our disgusting images might have been more memorable because they weigh particularly heavily on psychological attributes or processes that facilitate retention - these attributes or processes would not be considered to be specific to the emotion disgust, but to also contribute to the memory advantage observed for emotional stimuli more generally. Memory researchers and appraisal theorists have suggested a range of plausible influences on memorability. Together, they have identified a range of attributes that are important in the generation of varied emotional experiences and reactions [41]. These include, but are not limited to, salience, thematic relevance to the self, incongruity, meaningfulness, and importance. Given the presence of strong correlations between at least some of these factors, notably importance (i.e. consequentiality), meaningfulness and memorability [42], what the constructs share is plausibly more significant than the specifics of the individual dimensions identified.
We have previously argued that attributes such as those identified as important in the generation of emotional experience may contribute, individually or collectively, to a factor that has been the focus of recent empirical investigations - the immediate impact an item has on an individual [43,44]. The term impact derives from photojournalism, where it is used to describe powerful and striking images [45]. In recent behavioural and neuroimaging studies, impact has been shown to influence the allocation of visual attention [44] and the amygdala response [43] to negative emotional images, comparing high versus low impact image sets that have been carefully matched on a number of stimulus attributes, including arousal and valence. These studies argued that ratings of impact reflect an individual's undifferentiated reaction to the image and thus index the immediate significance or relevance of a stimulus for the self. Given that increasing or dividing attention to encoded information is known to lead to enhanced and reduced remembering, respectively [32,34,46], and that the amygdala is a brain structure known to be important for emotional memory [47,48,49,50,51,52], it follows that ratings of impact may be a strong predictor of emotional memory in the present study. A preliminary visual inspection of the most and least frequently remembered images across our image categories suggested that the former did indeed have immediate and strong effects upon the viewer, providing some initial support for this idea. This effect was not limited to the disgusting images, however, as several frightening images that were frequently remembered also appeared to share this eye-catching quality. As noted above, the impact of these images upon the viewer could relate to a number of different factors, such as the extent to which the images were incongruent with participants' previous experience, such that their meaning or significance was in one way or another difficult to grasp.
One example is a picture of a man kissing the side of a woman's head. The woman is bruised and bloodied and appears to be unconscious or dead. It is unlikely that participants would have seen an image like this previously. Thus, the image as a whole is incongruous with previous experience, makes a strong impression, and is consequently, well-remembered. The purpose of Experiment 2 was to determine which attributes are the strongest predictors of the emotional memory effects observed in Experiment 1 by having participants rate the stimuli on a number of dimensions. To explore the contribution of impact, identified in previous research as a key determinant of heightened attention and amygdala response to negative emotional images, we employed an impact rating scale; this scale captured participants' undifferentiated response to images as indexed by the immediate impact these images had upon the participant. In the Introduction we noted that ratings of arousal may relate to some physiological state of excitement or activation. As it is possible that ratings of impact may likewise have a physiological basis, we also included ratings of participants' negative and positive body state reactions to the images. Both Tulving [53] and Gardiner [54,55] have argued that remember responses are often based on remembered thoughts and feelings. To assess this sort of elaborative processing, we included a fourth rating scale that indexed the number of thoughts and ideas evoked by the content of each image. These four new ratings (impact, negative and positive body state reactions, and ideation) supplemented our earlier ones (arousal, pleasantness, approach-avoidance, distinctiveness, and visual complexity) to enable us to determine which factors best predict picture recognition. 
The prediction was that images that were frequently remembered in Experiment 1 would be rated high in impact by this separate group of participants, and that ratings of body state and ideation might relate to judgements of impact. A summary of mean ratings, and Mann-Whitney comparisons between the three image categories employed, is shown in Table 2. It was not considered appropriate to calculate recollection estimates in an items analysis, as different participants contributed remember hits and remember false alarms. Instead, stepwise multiple regression analyses were conducted to isolate the variables that predicted remember hit rates. Indeed, in Experiment 1, the correlation between recollection estimates and remember hit rates was extremely high, r(32) = 0.97, p < .001. The predictors entered into the multiple regressions included the four personal reaction ratings (impact, negative and positive body state reactions, and ideation), and also the five previously collected ratings (arousal, pleasantness, approach-avoidance, distinctiveness, and visual complexity). Across the disgusting, frightening and positive images, the best regression model had an adjusted R² of 0.30, F(2, 105) = 22.00, p < .001. There were two significant predictors of remember hit rates: impact (beta = 0.33, p < .001) and distinctiveness (beta = 0.26, p < .05). However, when only the negative (disgusting and frightening) images were included in a similar analysis, impact was the only predictor: this model had an adjusted R² of 0.32, F(1, 70) = 33.53, p < .001. Impact was a highly significant predictor, with a beta of 0.57, p < .001. The latter analysis therefore suggests that the contribution of distinctiveness to remembering was attributable to the inclusion of the less distinctive (and less well remembered) positive images. Crucially, arousal ratings did not emerge as a predictor of remembering.
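A stepwise regression of the kind described above can be sketched as a greedy forward selection: at each step, add whichever predictor most improves the model's adjusted R², and stop when no candidate helps. This is a minimal illustration with simulated item-level ratings, not the authors' analysis pipeline or their data; the variable names and effect sizes are invented, loosely echoing the reported pattern (impact predictive, arousal correlated with impact but carrying no independent signal).

```python
import numpy as np

def adj_r2(y, X):
    """Adjusted R^2 of an OLS fit of y on the columns of X (intercept added)."""
    n, k = X.shape
    Xd = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    r2 = 1 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

def forward_stepwise(y, predictors, min_gain=0.01):
    """Greedy forward selection on adjusted R^2."""
    selected, best = [], -np.inf
    remaining = set(predictors)
    while remaining:
        # Score each candidate model: already-selected columns plus one more
        scores = {
            name: adj_r2(y, np.column_stack(
                [predictors[s] for s in selected] + [predictors[name]]))
            for name in remaining
        }
        name = max(scores, key=scores.get)
        if scores[name] <= best + min_gain:
            break  # no candidate improves the model enough
        selected.append(name)
        best = scores[name]
        remaining.discard(name)
    return selected, best

# Simulated ratings: impact drives remembering; arousal is correlated
# with impact but adds nothing once impact is in the model.
rng = np.random.default_rng(0)
n = 108  # one row per image, as in an items analysis
impact = rng.normal(size=n)
arousal = 0.45 * impact + rng.normal(scale=0.9, size=n)
distinctiveness = rng.normal(size=n)
remember_rate = 0.57 * impact + rng.normal(scale=0.8, size=n)

selected, score = forward_stepwise(
    remember_rate,
    {"impact": impact, "arousal": arousal, "distinctiveness": distinctiveness},
)
print(selected)  # impact should be chosen first
```

Because adjusted R² penalises extra parameters, a predictor that is merely correlated with an already-selected one tends to be rejected at the stopping rule, mirroring how arousal dropped out of the reported models once impact was included.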
This suggests that the memory effects attributed to differences in arousal ratings in previous investigations [2,17] may in fact have resulted from other factors. Further stepwise multiple regressions were conducted to discover which image characteristics were the strongest predictors of impact ratings. A first analysis examined the disgusting, frightening and positive images. The independent variables specified above were entered, with the exception of impact, which constituted the dependent variable. The resultant model was highly significant, adjusted R² = 0.64, F(4, 103) = 48.40, p < .001, and revealed that four aspects of the Experiment 1 images were important contributors to their impact level. Negative body state reaction was the strongest predictor of impact (beta = 1.07, t = 7.20, p < 0. Arousal was not identified as a significant predictor in any of the regression models. Thus arousal, conceived by many dimensional theorists as an 'excitement' dimension and associated with physiological arousal, does not fully capture either the affective properties of emotional memory or the conceptual basis of impact. Importantly, the multiple regression analysis does not discount some association between arousal and impact; indeed, impact was found to significantly correlate with arousal, r(108) = 0.45, p < .001. Rather, it shows that image properties other than arousal are more important in determining impact.

General Discussion

The goal of this research was to examine the influence of disgusting versus frightening negative emotional images on recognition memory and to explore factors that contribute to heightened recollection. The results of Experiment 1 indicated that disgusting scenes were retained to a greater extent than images of equally unpleasant frightening scenes, with larger recollection estimates calculated for disgusting scenes.
Importantly, this pattern emerged even though the disgusting images were rated as less arousing than the frightening ones but were otherwise matched for unpleasantness, distinctiveness, visual complexity, and other stimulus attributes. In Experiment 2, multiple regression analyses showed that improved recollection was best accounted for by differences in the immediate impact of the images, and not differences in arousal or pleasantness. These data indicate that the memory advantage for disgusting images may derive from attributes other than valence and arousal, dimensions that feature prominently in the emotion memory literature [17,40,56]. Contrary to the widely held view that arousal in particular is a major determinant of emotional memory effects, the present data suggest that a construct that we refer to as 'impact' may offer a more adequate explanation. A role for impact aligns with our previous work showing that impact influences the allocation of visual attention to negative emotional images [44], as increased attention to encoded information is known to enhance recollection [32] whereas divided attention is known to reduce recollection [34,46]. It is furthermore consistent with research showing a heightened amygdala response to high versus low impact negative images [43], as both lesion and functional neuroimaging studies [47,48,49,50,51,52] have demonstrated that the amygdala plays a central role in emotional memory. The current findings also correspond with research showing that adolescents' and adults' appraised impact of the September 11th US attacks predicted recalled intensity of sadness, anger, and anxiety, changes in memory for these emotions over time, and symptoms of post-traumatic stress [57]. Participants' ratings of impact indicated the extent to which they were affected by the images, that is, how much they felt the image content created an instant impact on them personally.
Our procedure for rating impact used short presentation durations, and the instructions emphasized immediate judgments based on generic senses or feelings [58] without deconstruction of their elements. In previous work [43,44], we have argued that impact ratings index participants' undifferentiated emotional response to a stimulus, without requiring more detailed consideration of specific dimensions of image content. To illustrate, an individual might have an undifferentiated 'Yuk' or 'What the …?' reaction to a specific image, without explicit consideration of how disgusting or arousing that image might be. This is relevant to a distinction that has been drawn between emotional images inducing a genuine emotional reaction (e.g., making you feel 'sick to your stomach'), and those being coldly appraised as having 'affective quality' (e.g., a cold evaluation that an image depicts a nauseating scene) [58]. It is possible that arousal and pleasantness ratings failed to predict memory well because these ratings can be more readily made on the basis of intellectually detached judgments of affective qualities. In contrast, rated impact may have been a stronger predictor because it relies on some degree of personally felt core affect, however fleetingly invoked in a laboratory setting. Russell [10] has defined core affect as a neurophysiological state that is 'consciously accessible as a simple, nonreflective feeling that is an integral blend of hedonic and arousal values'. Insofar as ratings of impact were designed to quantify a genuine emotional response, it is noteworthy that negative body state reactions were prominent in the regression analysis of impact ratings. Arousal, on the other hand, was not. This indicates that the impact ratings capture at least some qualitative attributes of visceral, and potentially other bodily, reactions that are missed by assessments of arousal in the current study.
The observed involvement of negative body state reactions is not unexpected as it is consistent with theoretical work proposing that disgust is an emotion that evolved from a more primitive system involved in distaste [59]. Indeed, it has been suggested that disgust is often experienced as a visceral sensation, and that this is likely to be due to the processes of nausea, throat clenching, and food expulsion that are often triggered by this emotion [60]. Schnall and colleagues [60] further suggest that whereas emotions typically involve a physical and embodied component, this feature may be particularly pronounced for the emotion disgust. Though impact ratings were designed to assess the immediate emotional effects of the image upon the viewer rather than an intrinsic property of the image, these ratings are likely to incorporate and be influenced by attributes identified by appraisal theorists as important in the generation of emotional reactions and responses [41]. These include attributes such as the immediate significance or relevance of a stimulus for the self, and distinctiveness or incongruity. Relevance to the self relates to constructs such as core affect, discussed above, and the "working model of the self" [61] or "schematic model of the self" [62] discussed in relation to autobiographical recollection and Remember/Know performance in sad mood states, respectively. Personal relevance is known to contribute to flashbulb memory formation in the case of emotionally-charged public events [1], and influences whether an event is remembered rather than known [63]. Interestingly, though Ochsner [17] found a recollection advantage for negative relative to positive images matched for arousal, he noted that this advantage might have been lost had the two image sets been equated for 'personal relevance.'
Furthermore, Adolphs [64] has argued that 'constructs such as impact and relevance' should form the basis for investigating individual differences in amygdala function. This interpretation accords with Ewbank's conclusion that amygdala function is determined not simply by arousal (or valence) alone but by an event's significance or relevance to the individual [43]. Further relevant to the current research is the contribution of distinctiveness to rated impact and estimates of recollection. Though the memory advantage for disgusting relative to fearful stimuli emerged despite these stimulus sets being matched for distinctiveness, after body state reactions, distinctiveness was a second predictor of impact ratings. As operationalised in our study, items can be distinctive on the basis of semantic through to perceptual levels of analysis. Thus, images having impact seem to involve an intersection between felt affect and more traditional cognitive attributes associated with what makes events distinct, such as rareness or incongruity with prior experience. In the literature examining memory for non-emotional and emotional items, distinctive items tend to be remembered or recalled more often than less distinctive items [36,37,38]. It is important to emphasise that distinctiveness is not equivalent to visual complexity, which has been used to match stimulus categories in previous studies of emotional memory [17,18] but did not emerge as a significant predictor of impact or recollection here. The present findings demonstrate that disgusting images may be better remembered than equally pleasant frightening ones that are matched for a range of emotional and non-emotional stimulus attributes, despite the disgusting images being less arousing. Neither arousal- nor valence-based accounts of emotional memory can readily account for this finding, with the construct of impact instead emerging as a strong predictor of the current emotional memory effects.
Previous work has shown that impact influences the allocation of visual attention and the amygdala response to negative emotional images, yet important issues remain to be addressed. One obvious application of the present findings would be to assess memory for well-controlled stimuli that vary in rated impact, while another would be to assess whether impact's explanatory power extends to memory for highly positive images and stimuli in other modalities. The present contribution of rated body state, distinctiveness and (lack of) ideation to impact further suggests promise in clarification of the factor structure that underlies diverse ratings of image content. In the case of the bodily state reactions to high impact images, examination of psychophysiological indices should also be prioritised. It should be noted that a potential limitation of this research is that regression analyses were conducted using memory data and stimulus ratings collected from separate samples of participants. Memory was not tested in Experiment 2 because multiple viewings and ratings might have biased recollection even two weeks later. Though the current data leave open the issue of whether participants' own ratings would predict memory in a way similar to that described here, previous research has reported that for rated impact, consistency of ratings across participants is highly significant [43,44]. It would therefore not be unreasonable to expect a similar pattern of findings if both datasets had been drawn from the same sample of participants. Overall, our findings are consistent with the idea that the extant research on emotional memory has been constrained by its treatment of emotion as merely arousal. They demonstrate that a more complete understanding will only result from systematic consideration of the contributions made by categorical affective states (e.g. basic emotions) and other stimulus and event attributes linked to cognitive appraisal [7]. 
More specifically, the present research suggests a role for the appraised impact of emotional images. While the concept of impact has not been evaluated in experimental memory research, it has played a key role in artistic discourse. A carefully-crafted media photograph should be indelible over long periods, and in photojournalism, striking and eye-catching visual images are routinely referred to as images with impact [45].

Experiment 1

This research was approved by the University of Cambridge Psychology Research Ethics Committee and was conducted according to principles expressed in the Declaration of Helsinki. Participants. Participants gave informed written consent before participating. Thirty-two community volunteers (twenty-five females; mean age = 35.2, SD = 7.9) participated in exchange for a small honorarium. All had normal or corrected-to-normal vision. Stimuli. The stimulus set comprised 36 disgusting, 36 frightening, and 36 positive 72 dpi colour photographic images. Images were selected on the basis of ratings from a larger set of 208 images, the majority of which were taken from the International Affective Picture System (IAPS) [14]. All images had been rated previously for five basic emotions (disgust, fear, happiness, anger and sadness) and five additional variables (pleasantness, arousal, tendency to approach or avoid, distinctiveness, and visual complexity). The instructions had emphasized that participants should rate the images on the basis of their own personal reactions rather than how people in general should feel. The ratings on these other dimensions made use of well-established scales.
The instructions for rating pleasantness and arousal followed the descriptions in the IAPS manual [14]; approach-avoidance was rated with endpoints "very inclined to approach the scene" to "very inclined to avoid the scene"; distinctiveness was rated on the basis of how rarely similar scenes are encountered, relative to other scenes or images, in everyday life; and visual complexity instructions were adapted from Ochsner [17]. More specifically, an image could be considered complex either because it had many simple objects that each had little detail, or a few objects that each had a lot of detail. Lower scores on the 9-point Likert scales indicated unpleasant, low arousal, high approach, indistinctive, and low visual complexity image qualities. The mean ratings for the three image categories employed in Experiment 1 are given in Table 3. Mann-Whitney comparisons showed that the frightening images were rated as more frightening than the other two categories, the disgusting images as more disgusting, and the positive images as more pleasant. Frightening images were also rated as the most arousing, whereas disgusting and positive images did not differ on arousal ratings. Disgusting and frightening images were matched on the following other dimensions: pleasantness, approach-avoidance, distinctiveness, visual complexity, anger and sadness. The stimuli were further divided into two closely matched study sets (set one and set two) on the basis of their content and ratings, each consisting of 18 images belonging to each emotion category. Participants viewed one study set in the encoding session, while the other set served as foils for the recognition memory test. Sixteen mildly unpleasant filler images were also included to minimize primacy and recency effects. Design. The independent variable was image category (disgusting, frightening, and positive images; repeated measure).
The experiment consisted of an encoding session and a test session separated by an interval of approximately two weeks (mean interval = 13.75 days, SD = 1.02). Half of the participants were shown study set one as targets while study set two served as foils in the test session; for the other half of participants the target and foil sets were reversed. The dependent variables in the test session were the proportions of remember, know and new responses made to targets and foils from each image category. These responses were converted into recollection and familiarity (Fd′) estimates, using the equations shown in Appendix S1. Procedure. The images were presented on a VDU using PsyScope [65]. They were presented against a black background, and subtended a vertical visual angle of approximately 15° and a horizontal visual angle of approximately 13°. In the encoding session, participants were presented with 18 disgusting, 18 frightening, and 18 positive images from stimulus set one or set two. Images were shown in one of four pseudorandom orders for each stimulus set across three counterbalanced blocks, such that no more than three images from the same emotion category were shown consecutively. To minimize primacy and recency effects, eight filler images were presented at the beginning and end of the encoding session. On each trial, a fixation cross was presented for 500 ms, followed after a 500 ms inter-stimulus interval (ISI) by an image displayed for 5000 ms. Participants were instructed to rate each image for how much red it contained, and to do this from the perspective of a picture editor of a journal, because red did not reproduce well at the printers. This served to focus attention on each image, without requiring any in-depth processing of their emotional or semantic content. Participants were not aware that their memory would be tested when they returned approximately two weeks later for the test session.
In the test session, participants were initially trained in the Remember/Know distinction, with a procedure adapted from Gardiner, Ramponi and Richardson-Klavehn [66]. They were asked to indicate using a button box whether the image had been presented in the encoding session and they could recollect details about the context in which it had been presented ("remember"), or whether the image seemed familiar but they could not recollect anything about the context in which the image had been presented at encoding ("know"), or whether the image had not been presented at encoding ("new"). Guessing was strongly discouraged. Participants viewed images from the study set intermixed with images from the image set that was not presented at encoding (foils). Each image remained on the screen until the participant made their response. Images were presented in four pseudorandom blocks whose order was counterbalanced across participants.

Experiment 2

Participants, design, and procedure. A new group of 12 participants (8 women; mean age 31.50 years, SD = 6.37) rated the full set of 208 images from which the Experiment 1 images had been selected, for four variables: impact, negative body state reaction, positive body state reaction, and ideation. The images were presented on a VDU, in the same way as the encoding session of Experiment 1. The instructions emphasized that participants should rate the images on the basis of their own personal reactions rather than how they imagined people in general should feel. All participants first rated the images for their immediate impact following a short (500 ms) presentation on a scale that ranged from 1 (no impact) to 9 (intense impact). Following Murphy et al. [44], participants were instructed to consider each picture as a whole and judge whether they felt the content of the image created an instant sense of impact on them personally.
They were asked not to think in detail about the picture or its contents in terms of particular properties like the positive or negative feelings it might invoke (e.g. joy, anger, etc.), how distinctive the image was or how many thoughts and ideas it led to. The instructions for rating impact were as follows: "In this experiment you will view a series of pictures with varying content. Each will be presented only for a very short amount of time - this is because we want you to rate each one for its immediate impact. By this we mean that before you get to think about what is in the picture you may be instantly affected by it without necessarily knowing why. We would like you to consider each picture as a whole. Just judge whether you feel the content of the image created an instant sense of impact on you personally. Try not to think in detail about the picture or its contents in terms of particular properties like the particular positive or negative feelings it might invoke in you (e.g. fear, anger, joy, etc.), how distinctive the image is or how many thoughts and ideas it leads to. We just want an estimate of its overall immediate impact, irrespective of what it is that might underlie its impact on you personally (i.e. whether it's positive, negative or neither). Remember, it is your own personal reaction we are interested in, not how you think people in general should feel. Just glance at the picture and make an 'instant' judgment." No participant reported difficulty with understanding the impact rating instructions, and importantly, a high level of agreement in impact ratings across participants has been reported previously [43]. Following the impact rating task, the images were viewed again, this time for 5000 ms each, in a different order, and each image was then rated for (1) negative body state reaction, (2) positive body state reaction, and (3) ideation (the number of elicited thoughts and ideas). The order of the scales was counterbalanced.
The former two contrast with pleasantness ratings by emphasising genuine visceral and emotional feelings, as opposed to cold rational appraisals. For negative body state reaction, participants were asked to indicate the extent to which each scene caused a negative body state reaction such as an unpleasant feeling in the pit of their stomach, shivers up their spine, or hairs on the back of their neck standing on end. For positive body state reaction, participants were asked to indicate the extent to which each scene caused a positive body state reaction such as feeling warm inside, wanting to laugh out loud, or feeling so happy that they might cry. Ratings of negative and positive body state reactions were made on 9-point scales with endpoints 1 (no negative/positive body state reaction) to 9 (very high negative/positive body state reaction). For ideation, participants were asked to rate how many thoughts and ideas came to mind for each image on a scale with endpoints 1 (none at all) to 9 (a large number of ideas).

Supporting Information

Appendix S1 Equations used to calculate the two main dependent variables, Recollection scores and Familiarity scores.
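The Appendix S1 equations themselves are not reproduced in this excerpt. A common formulation for converting remember/know responses into recollection and familiarity (Fd′) estimates is the independence remember-know correction; the sketch below assumes that formulation, so it should be read as illustrative rather than as the paper's exact equations.

```python
from statistics import NormalDist

def irk_estimates(r_hit, k_hit, r_fa, k_fa):
    """Recollection and familiarity (Fd') under the independence
    remember-know correction (an assumed formulation, not necessarily
    the one in Appendix S1).

    r_hit/k_hit: proportions of remember/know responses to targets;
    r_fa/k_fa: the same proportions for foils.
    """
    recollection = r_hit - r_fa            # remember hits corrected for false alarms
    f_hit = k_hit / (1 - r_hit)            # know responses, conditional on the
    f_fa = k_fa / (1 - r_fa)               # item not having been recollected
    z = NormalDist().inv_cdf
    fd_prime = z(f_hit) - z(f_fa)          # familiarity expressed as d'
    return recollection, fd_prime

# example: 60% remember / 20% know hits, 10% remember / 10% know false alarms
rec, fd = irk_estimates(0.60, 0.20, 0.10, 0.10)
```

With these inputs the corrected recollection is 0.50, and familiarity is positive because the conditionalised know hit rate exceeds the conditionalised know false-alarm rate.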
Improving Google Flu Trends for COVID-19 estimates using Weibo posts

While incomplete non-medical data has been integrated into prediction models for epidemics, the accuracy and the generalizability of the data are difficult to guarantee. To comprehensively evaluate the ability and applicability of using social media data to predict the development of COVID-19, a new confirmed case prediction algorithm improving on the Google Flu Trends algorithm, called Weibo COVID-19 Trends (WCT), is established based on the post dataset generated by all users in Wuhan on Sina Weibo. A genetic algorithm is designed to select the keyword set for filtering COVID-19 related posts. In training, WCT consistently achieves the highest average test score between daily new confirmed case counts and the prediction results. It continues to produce the best prediction results among the compared algorithms as the number of forecast days increases from one to eight, with the highest correlation score falling from 0.98 (P < 0.01) to 0.86 (P < 0.01) over the whole analysis period. Additionally, WCT effectively corrects the Google Flu Trends algorithm's tendency to overestimate the epidemic peak value. This study offers a highly adaptive approach for feature engineering of third-party data in epidemic prediction, providing useful insights for the prediction of newly emerging infectious diseases at an early stage.

Introduction

Since the outbreak of COVID-19 (formerly known as 2019-nCoV) in 2019 (Shen et al., 2020), the pandemic has become a major threat to the whole world. By May 30, 2021, the virus had affected more than 169 million people and caused the deaths of 3.5 million in more than 190 countries and regions worldwide (JHU, 2021).
Although many measures have been taken to cope with this health emergency of international concern, such as social distancing, lockdowns, quarantines, and university and business closures (Tison et al., 2020), monitoring the dynamics of the epidemic and preventing its spread pose a huge challenge in practice due to the limited capacity of conventional disease surveillance systems. Studies have shown that publicly available data can play a crucial role in tracking the spread of epidemic disease as a complement to conventional public health surveillance (Gundecha and Liu, 2012; Samaras et al., 2020). Non-medical data generated from various sources (Aiello et al., 2020; Kirian and Weintraub, 2010; Ram et al., 2015) has been widely used to estimate disease incidence and to detect disease outbreaks before clinically confirmed data is available (Charles-Smith et al., 2015; Dai et al., 2021; Lu et al., 2021). Social media data collected from Facebook (Gittelman et al., 2015; Strekalova, 2016), YouTube (Basch et al., 2015; Nerghes et al., 2018), and Instagram (Guidry et al., 2017; Seltzer et al., 2017), as well as Internet search queries (Ginsberg et al., 2009; Zhao et al., 2018), are also used to predict diseases of public health concern. For example, Twitter data is widely used for early warning and outbreak detection, such as to predict syphilis (Young et al., 2018), swine flu (Kostkova et al., 2014), flu (Chen et al., 2014), and Ebola (Yom-Tov, 2015). Representative work was done by the Google research and development team, who developed the Google Flu Trends (GFT) algorithm based on the high correlation between the number of certain queries on the Google search platform and influenza-like activity levels (Ginsberg et al., 2009). They accurately estimated the level of influenza activity in near-real time without knowing the development stage and transmission mechanism of the disease.
Since then, many researchers have been inspired to track epidemics with social media data (Araujo et al., 2017; Huang et al., 2013; Signorini et al., 2011). For the unprecedented COVID-19 pandemic, some researchers have also applied social media and Internet data to monitor and estimate the development of the epidemic (Ayyoubzadeh et al., 2020; Li et al., 2020; Qin et al., 2020). However, many of these studies used only sampled, incomplete data, so the integrity of the dataset and the accuracy of the prediction models are both difficult to guarantee, and there is still a lack of a general prediction framework that can accurately predict the course of COVID-19 using social media data. To detect and predict the development of COVID-19 using publicly available social media data, this study applied the daily new confirmed COVID-19 case counts in Wuhan reported by its Health Commission, together with a complete dataset of user posts from Sina Weibo (Weibo, 2020), a Twitter-like microblog platform in China, to propose a new confirmed case prediction algorithm named Weibo COVID-19 Trends (WCT) based on the GFT algorithm. WCT can effectively predict the daily new confirmed case counts before the official report is released. This study also provides a general prediction framework that can easily be extended to predict other diseases or public emergencies using accessible third-party data, offering a promising approach for forecasting newly emerging infectious diseases at an early stage, when most epidemiological characteristics are unknown. Table 1 lists the nomenclature used throughout this study. The main contributions of this study are summarized as follows:
1. A new confirmed case prediction algorithm is developed based on GFT to predict the development of COVID-19.
2. A genetic algorithm is designed to select a keyword set to filter Weibo posts related to COVID-19.
3.
A highly adaptive framework for feature engineering, which allows third parties to utilize the data for epidemic predictions, is proposed.

The rest of the paper is organized as follows. Section 2 reviews the GFT algorithm and its updated versions. Section 3 mainly describes the framework for the proposed COVID-19 prediction algorithm (i.e., WCT), in which a genetic algorithm is implemented to improve related keyword set selection. Section 4 presents the estimated results of WCT with a comparison with other algorithms including GFT. Finally, Section 5 summarizes the findings and limitations of this study.

The initial version of GFT

Google Flu Trends (GFT) is a short-term forecasting tool for weekly influenza activity, used as an auxiliary method of influenza surveillance (CDC, 2020). It was launched in 2008 with satisfactory forecast precision at the time and was further applied to influenza surveillance and early warning systems in many countries (Butler, 2013). Although Google improved the details of the algorithm many times over the course of GFT's deployment, due to the impact of sudden increases in influenza-like illness (ILI) related queries and other factors (Kandula and Shaman, 2019; Lazer et al., 2014b), the problem of inaccurate prediction was never completely solved. Finally, Google shut down the GFT flu prediction function in 2015 (GFT, 2015). The most well-known version of the GFT algorithm is its initial one. With the fraction of certain ILI-related search queries from Google and the percentages of ILI physician visits from the US Centers for Disease Control and Prevention (CDC) as input, the GFT algorithm trains a log-odds linear regression model (LR) to estimate ILI incidence.
LR uses the log-odds of an ILI physician visit and the log-odds of an ILI-related search query to realize regression prediction:

logit(I(t)) = α · logit(Q(t)) + ε,

where logit(p) = ln(p / (1 − p)), I(t) is the percentage of ILI physician visits, Q(t) is the ILI-related query fraction at time t (i.e., the sum of each query fraction in the selected ILI-related search queries set), α is the multiplicative coefficient, and ε is the error term. Firstly, the model is trained with each of the 50 million candidate common queries separately. It outputs the prediction result of ILI physician visits and the Pearson correlation score between the estimates and the CDC ILI data. Then the aggregated top-scoring queries are used to train the model and the best fit (when the number of keywords n = 45) is selected automatically. The selection of queries for the best fit is called "the greedy combination algorithm" (GCA). Finally, the selected queries are used to train the model and predict the ILI physician visits. This approach successfully estimated the level of weekly influenza activity in the United States from 2007 to 2008 with a mean correlation score of 0.97, 1-2 weeks ahead of the reports published by CDC. It demonstrated the possibility of using search queries to detect influenza epidemics and inspired researchers to explore the application of social media data in public health surveillance (Cui et al., 2015; Schmidt, 2012).

Updated versions and developments

Google officially launched GFT (GFT 1.0) in November 2008, and it subsequently gained wide popularity. However, in the first wave of the influenza A (H1N1) epidemic, that is, from April to August 2009, the predicted incidence of H1N1 was substantially lower than the ILI activity reported by CDC (Butler, 2013). Therefore, Google upgraded GFT for the first time and developed the second version, GFT 2.0 (Cook et al., 2011).
GFT 2.0 adjusted the number and category of selected search queries, referring to the ILI monitoring data during the first wave of the H1N1 epidemic (March 29, 2009 to September 13, 2009). It increased the number of search query terms and deleted search queries that were not directly related to influenza, which significantly improved the performance of GFT 2.0. From its launch in September 2009, its prediction results were very similar to the ILI activity in the United States until 2012. In the influenza epidemic season of 2012-2013, GFT 2.0 greatly overestimated the influenza epidemic, with almost twice the result of CDC monitoring (Butler, 2013). This overestimation led to the second upgrade of GFT (Copeland et al., 2013). GFT 3.0 was officially launched in October 2013, and it made two changes relative to GFT 2.0: weakening the impact of abnormal media hot spots and using an elastic net to predict ILI (previously based on linear regression). Compared with GFT 2.0, GFT 3.0 significantly reduced the peak amount of its predicted ILI in the 2012-2013 flu season. However, its predicted result was still slightly higher than that of CDC in the United States, and in the 31 weeks after the implementation of GFT 3.0, the prediction result was higher in 23 weeks (Lazer et al., 2014a). The last upgrade of GFT took place in August 2014 (Lampos et al., 2015). GFT 4.0 expanded the GFT 3.0 model by incorporating the queries selected by the elastic net into a non-linear regression framework based on a composite Gaussian Process. It also injected the ILI activity data as prior knowledge about the disease into the model. The bias of GFT prediction was significantly reduced. GFT 4.0 was used until August 2015, when Google shut down the GFT prediction service.
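The log-odds linear regression at the heart of the initial GFT version can be sketched in a few lines. The data below are synthetic; the intercept term is an assumption added for numerical convenience, since the paper's formula folds everything beyond the multiplicative coefficient into the error term.

```python
import numpy as np

def logit(p):
    return np.log(p / (1 - p))

def inv_logit(x):
    return 1.0 / (1.0 + np.exp(-x))

def fit_gft(ili, queries):
    """Fit logit(I(t)) = alpha * logit(Q(t)) + beta by least squares."""
    y = logit(np.asarray(ili))
    x = logit(np.asarray(queries))
    A = np.column_stack([x, np.ones_like(x)])
    (alpha, beta), *_ = np.linalg.lstsq(A, y, rcond=None)
    return alpha, beta

def predict_gft(alpha, beta, queries):
    """Map new query fractions to estimated ILI visit percentages."""
    return inv_logit(alpha * logit(np.asarray(queries)) + beta)

# synthetic demonstration: recover a known alpha from noiseless data
rng = np.random.default_rng(1)
q = rng.uniform(0.01, 0.2, 100)          # ILI-related query fractions
i = inv_logit(1.5 * logit(q) + 0.2)      # simulated ILI visit percentages
alpha, beta = fit_gft(i, q)
```

Working in logit space keeps both the inputs and the predictions inside (0, 1), which is why the model regresses log-odds rather than raw percentages.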
Because of the important role of ILI surveillance in public health, many researchers remain committed to improving the predictive performance of ILI models, for example by correcting the limitations of the GFT algorithm process, updating or adding to the training data sources of the prediction model, and proposing new prediction algorithms based on GFT. Kandula and Shaman (2019) proposed a corrected GFT algorithm, which uses the estimated values of the original GFT algorithm as new data for training the ILI prediction model, reducing the total prediction error by 44%. This algorithm addresses the problem that the ILI data provided by CDC was neither timely nor complete when the GFT algorithm was proposed. It uses complete ILI data and GFT estimates to train the prediction model and replaces LR with an autoregressive integrated moving average (ARIMA) model. The algorithm greatly improves the prediction accuracy and demonstrates the validity and practicability of the GFT prediction results. Similarly, other studies (Dugas et al., 2013; Preis and Moat, 2014; Santillana et al., 2015; Wagner et al., 2018) have also found that replacing LR with other non-linear regression models and combining new data sources, including search queries, social media, and traditional data sources, into the prediction model can significantly improve the accuracy of ILI prediction.

Data description

Sina Weibo is a popular Chinese microblog platform with millions of users voluntarily sharing their lives and thoughts (Weibo, 2020). The considerable amount of post data generated by so many users offers the possibility of monitoring and predicting the development of emerging infectious diseases. In this study, all posts made by Weibo users in Wuhan from December 1, 2019, to March 20, 2020, were collected. The dataset spans 111 days and covers the period before the COVID-19 outbreak as well as its evolution. The dataset contains 38,182,972 posts published publicly by 2,239,450 unique users.
Each record of post data contains the post's content, type (whether the post was original or forwarded), time, user nickname, and corresponding encrypted ID. If the post was forwarded, the record also contains the original post content (otherwise, it is blank), the original posting time, and the original user's nickname and ID. During the data collection period, the mean number of daily unique users was over 117,000, and they generated more than 343,000 posts every day; on average, each user generated 2.9 posts per day. The relative frequency of a representative epidemic-related keyword closely tracks the daily new confirmed case counts (Fig. 1c), with a Pearson correlation score of 0.89 (P < 0.01).

The framework of Weibo COVID-19 Trends (WCT)

Inspired by the high correlation between the relative frequency of certain keywords in Weibo posts and the daily new confirmed case counts of COVID-19, a new confirmed-case prediction algorithm named WCT, based on GFT, is proposed. The basic WCT workflow and its comparison with GFT are shown in Fig. 2. Both algorithms train a regression model to predict case counts, with the Pearson correlation score (R) between the prediction results and the real case counts as the evaluation indicator. In WCT, GCA is replaced by a genetic algorithm (GA) (Mitchell, 1998) when selecting the keyword set that best fits the prediction model. After comparing the performance of different prediction models, the LR model of GFT was retained as the prediction model in WCT.

GA for keyword set selection

A prior list of 41 keywords (see Appendix Table A) was first compiled to select all posts containing COVID-19 information, including the pneumonia-like epidemic's medical terminology, symptoms, and epidemic control measures and organizations. There are 4,761,010 related posts out of a total of 38,182,972 posts from all users (12.47%). Next, the keywords in each post related to the pneumonia-like epidemic were extracted, and a list of the 118,572 most commonly used keywords (see Appendix Table B) was produced.
The most frequent 2,000 keywords were chosen based on absolute frequency for the next analysis. The "absolute frequency" of a keyword is the total number of posts containing that keyword since the beginning of the statistical period. Next, the time series of the relative frequency of each commonly used keyword was obtained. The "relative frequency" of a keyword on a certain day is the number of posts containing the keyword on that day divided by the number of unique users on that day. The relative frequency of a keyword set (KS), i.e., the sum of the relative frequencies of the keywords in the selected KS, was used to train the case-count prediction model and then to predict the development of the epidemic. The purpose of KS combination and selection is to find the most epidemic-relevant keyword set (MKS) from the list of most commonly used keywords in Weibo posts. This paper designs a selection algorithm to seek the MKS that yields the highest R between the prediction results and the real case counts. Viewing the composition of a KS as analogous to an arrangement of chromosomes, GA is used to select the MKS. The fitness function of GA maximizes R between the prediction results, yielded by the prediction model, and the real case counts. The GA proceeds as follows:

Step 1: KS initialization. The initial KS group is formed by M KSs, each containing N keywords. Each KS is scored according to the fitness function.

Step 2: KS update. New KSs are formed through crossover, mutation, and recombination of keywords. Each iteration keeps the M best KSs according to R for the next generation, and the iteration repeats.

Step 3: Stop criteria. When the maximum iteration count MG is reached or R is high enough, the algorithm stops and outputs the MKS.

The flow chart of GA is shown in Fig. 3a. In the implementation, the parameters were set to M = 25 and MG = 100.
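The three steps above can be sketched as follows. This is a minimal illustration on synthetic keyword frequencies, where the fitness of a keyword set is the Pearson R between its summed relative frequency and the case-count series (in the paper, the fitness is computed through the full prediction model); all names and parameter values other than M and MG are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic setup: 30 candidate keywords over 80 days; the case-count
# series is driven by keywords 0-2 plus a little noise.
DAYS, VOCAB, N = 80, 30, 5
freq = rng.random((VOCAB, DAYS))                  # relative frequencies
cases = freq[:3].sum(axis=0) + rng.normal(0, 0.05, DAYS)

def fitness(ks):
    """Pearson R between the KS's summed relative frequency and cases."""
    signal = freq[list(ks)].sum(axis=0)
    return float(np.corrcoef(signal, cases)[0, 1])

def evolve(M=25, MG=100):
    # Step 1: initialize M keyword sets of length N (duplicates allowed,
    # as the paper notes duplicated keywords can appear in the MKS).
    pop = [list(rng.integers(VOCAB, size=N)) for _ in range(M)]
    for _ in range(MG):
        # Step 2: crossover between neighbours, occasional point mutation.
        children = []
        for a, b in zip(pop, pop[1:] + pop[:1]):
            cut = int(rng.integers(1, N))
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:
                child[int(rng.integers(N))] = int(rng.integers(VOCAB))
            children.append(child)
        # Keep the M fittest sets for the next generation.
        pop = sorted(pop + children, key=fitness, reverse=True)[:M]
    # Step 3: after MG generations, the best set is the MKS.
    return pop[0]

mks = evolve()
```

Because selection is elitist, the summed frequency of the returned set ends up highly correlated with the synthetic case counts.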
The respective MKS was then obtained separately for each fixed MKS length N, with N varying from 1 to 50. To avoid over-fitting, the training period was set from December 1, 2019, to January 29, 2020, and the test period from January 29, 2020, to February 22, 2020. To evaluate the advantages of GA, the MKS obtained by the GCA used in GFT was also analysed. The detailed MKS selection results are presented in Section 4.2.

LR for predicting the number of new confirmed cases

In this section, an LR model is applied to predict the number of new confirmed cases using the relative frequency of the MKS obtained by GA and the historical case-count sequence. The analysis period covers the complete development of COVID-19 in Wuhan except February 12 and 13, 2020, when the number of new confirmed cases increased abnormally due to a change in the criteria for counting diagnoses of the virus. The training set runs from December 1, 2019, to February 21, 2020, and the predicting set from February 22, 2020, to March 20, 2020. The case-count series was smoothed with a 3-day window before being used as input for prediction. There are two parameters in the fitting process: the duration (D) of the training data and the lag (g) for prediction. For example, a prediction model trained with D = 6 and g = 1 is shown in Fig. 3b. In this study, D = 3 was set to ensure adequate training data, and g = 1 was set to predict the next day's case counts using all information available to date. All training processes apply three-fold cross validation to reduce overfitting. The training and predicting processes are as follows.
Training process:

Model_trained = FIT_m(C_t; C_{t-g}, C_{t-g-1}, ..., C_{t-g-D+1}, P_{t-g}, P_{t-g-1}, ..., P_{t-g-D+1})

where Model_trained is the trained model, C_t and P_t are the case count and the relative frequency of the MKS at time t during the training period, respectively, and FIT_m is the fitting process that trains Model_trained on the data {C_t; C_{t-g}, C_{t-g-1}, ..., C_{t-g-D+1}, P_{t-g}, P_{t-g-1}, ..., P_{t-g-D+1}}. The length of the training window is D and the dimension of the training data is 2D + 1. The whole training set is {C_t; C_{t-g}, C_{t-g-1}, ..., C_{t-g-D+1}, P_{t-g}, P_{t-g-1}, ..., P_{t-g-D+1}} (t increasing from 1).

Predicting process:

C_t = Model_trained(C_{t-g-1}, C_{t-g-2}, ..., C_{t-g-D+1}, P_{t-g-1}, P_{t-g-2}, ..., P_{t-g-D+1})   (3)

where C_t is the case count at time t during the predicting period. Historical data {C_{t-g-1}, C_{t-g-2}, ..., C_{t-g-D+1}, P_{t-g-1}, P_{t-g-2}, ..., P_{t-g-D+1}} are input into the trained model Model_trained, which outputs the predicted case count at time t. The length of the predicting window is D and the dimension of the predicting data is 2D. The whole predicting set is {C_{t-g-1}, C_{t-g-2}, ..., C_{t-g-D+1}, P_{t-g-1}, P_{t-g-2}, ..., P_{t-g-D+1}} (t increasing from 1).

Previous research has demonstrated that non-linear regression models, such as Gaussian processes and Long Short-Term Memory (LSTM) networks, can achieve strong performance in COVID-19 tracking and prediction (Alakus and Turkoglu, 2020; Lampos et al., 2021). The performance of an LSTM model was therefore also evaluated for comparison with the LR model. A 4-layer LSTM model was designed with a dropout rate of 0.15, mean square error (MSE) as the loss function, and the Adam optimizer, trained for 100 epochs with a batch size of 10. The detailed estimation results are provided in Section 4.3.
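Under this notation, the training and predicting steps can be sketched with an ordinary least-squares fit standing in for the paper's LR model (all data are synthetic; the helper names are illustrative):

```python
import numpy as np

def make_rows(C, P, D, g, t_range):
    """Feature row for time t: C[t-g]..C[t-g-D+1], P[t-g]..P[t-g-D+1]."""
    return np.array([np.r_[C[t-g-D+1:t-g+1][::-1],
                           P[t-g-D+1:t-g+1][::-1]] for t in t_range])

def fit(C, P, D=3, g=1):
    """Least-squares fit of C[t] on the lagged C and P windows."""
    t_range = range(g + D - 1, len(C))
    X = np.c_[make_rows(C, P, D, g, t_range), np.ones(len(C) - g - D + 1)]
    y = np.array([C[t] for t in t_range])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict(coef, C, P, t, D=3, g=1):
    """Predict the case count at time t from data up to t - g."""
    return float(np.r_[make_rows(C, P, D, g, [t])[0], 1.0] @ coef)

# Synthetic example: case counts track a keyword-frequency wave.
rng = np.random.default_rng(1)
days = np.arange(100)
P = np.exp(-((days - 50) ** 2) / 200)     # MKS relative frequency
C = 100 * P + rng.normal(0, 1, 100)       # daily new confirmed cases
coef = fit(C[:80], P[:80])                # train on the first 80 days
c_hat = predict(coef, C, P, 85)           # one-day-ahead estimate
```

With D = 3 and g = 1 the model has 2D + 1 = 7 parameters (including the intercept), matching the dimension count above.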
Overview of COVID-19 related keywords and case counts

To investigate the relationship between the frequency of COVID-19 related keywords and the number of new confirmed cases per day, the temporal evolution of the keywords alongside the number of new confirmed COVID-19 cases in Wuhan was analysed. The direct Pearson correlation score R between the relative frequency of each of the top 2,000 commonly used keywords in Weibo posts and the number of new confirmed cases each day during the whole statistical period was calculated. Most of the correlated keywords are related to the treatment of COVID-19 ('hospitalization', 'physical examination', 'patient', and so on), and a few describe symptoms or conditions (such as 'breathing difficulties' and 'cough'). The most correlated keywords are 'hospital beds' (R = 0.84, P < 0.01) and 'Shu Hongbing' (R = 0.78, P < 0.01). The R values, as well as the absolute frequencies, of the ten most correlated and ten least correlated keywords are listed in Appendix Table C. The evolution of the number of confirmed COVID-19 cases and the relative frequencies of the five most relevant keywords are shown in Fig. 4. The relative frequency of each keyword closely follows the trend of the number of new confirmed cases, supporting the motivation of tracking COVID-19 with Weibo data. In contrast, the 10 keywords with the weakest correlation ('article', 'new product', '##', 'grandpa Li', 'concert', 'Trump', '19', 'Hubei Economy TV', '2019') were also analysed. These keywords with low correlation scores have little to do with the symptoms or treatment of COVID-19.

The R value of the selected MKS

GA and the GCA algorithm were both used to select the MKS.
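The per-keyword correlation screening described above can be sketched as follows (synthetic relative-frequency series; the keyword indices and series shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
days = 90
cases = np.abs(np.sin(np.linspace(0, 3, days))) * 500   # daily new cases

# Synthetic relative-frequency series: keyword 0 tracks the epidemic,
# keyword 1 is unrelated chatter, keyword 2 tracks it with a 5-day lag.
freq = np.vstack([
    cases / 1000 + rng.normal(0, 0.02, days),
    rng.random(days),
    np.r_[np.zeros(5), cases[:-5] / 1000] + rng.normal(0, 0.02, days),
])

def pearson(x, y):
    """Pearson correlation score R between two daily series."""
    return float(np.corrcoef(x, y)[0, 1])

# Score every keyword against the case-count series and rank them.
scores = {k: pearson(freq[k], cases) for k in range(len(freq))}
ranked = sorted(scores, key=scores.get, reverse=True)
```

In the paper this ranking is applied to the top 2,000 keywords, with the strongest correlates ('hospital beds', etc.) surfacing at the head of the list.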
By setting the MKS length N to vary from 1 to 50 and applying the LR and LSTM prediction models (D = 3, g = 1) within the GA and GCA algorithms, the changes in the indicator R between the prediction results and the real case counts were compared to evaluate the performance of the MKS selection algorithms. Each prediction model adopted three-fold cross validation, and the average test score on the training set was output as R. The MKSs (1 ≤ N ≤ 50) with the highest R selected by each algorithm are presented in Table 2; the original Chinese text for the keywords in each MKS is provided in Appendix Table D. Most keywords in the MKSs obtained by the GA or GCA algorithm are medical terms directly related to COVID-19 (such as 'isolation', 'CT', and 'coronavirus'). The MKSs also contain keywords that are not directly related to COVID-19, such as numbers ('14', '17') and personal pronouns ('you'). GA retains the most relevant keywords and automatically outputs the MKS with the best performance. Keywords in an MKS can be repeated if duplication makes the MKS perform better, and indeed some duplicated keywords appear in the MKSs of the GA-related algorithms (see Table 2): the KS with duplicated keywords performed best in the GA iteration process and became the MKS. Judging from the correlation between the relative frequency of the MKS and the daily COVID-19 case count, the performance of GA and GCA is close, but in terms of the R value of the resulting MKS, GA is better than GCA. The highest test score is obtained by the GA&LR algorithm (WCT) with R = 0.66 (p < 0.01), higher than the test score of GFT (i.e., GCA&LR) of R = 0.62 (p < 0.01). Among the four combined algorithms, GA&LR (WCT) has the best performance with an average test score of R = 0.65 (p < 0.01), while GCA&LSTM has the smallest average test score at R = 0.43 (p < 0.01). The variation of R for MKSs with different N is shown in Fig. 5a.
Notably, GA-based predictions are much more stable than GCA-based ones. For GA&LR and GA&LSTM, the correlation scores vary within a very limited range, from 0.60 to 0.66 and from 0.55 to 0.62, respectively. For GCA-based predictions, however, the correlation scores show unexpectedly large variations: GCA&LSTM generates the poorest prediction results, and the correlation score of GCA&LR drops to 0.21 when N = 50. In short, the MKS filtered by GA predicts daily new confirmed cases in close agreement with the real data. In addition, the performances of the MKSs filtered by GA and GCA (N from 1 to 50) were compared when the fitness function was instead to maximize the direct R between the relative frequency of the MKS and the daily new confirmed case counts. The experimental results further demonstrated the superiority of GA in selecting more relevant keyword sets, and its insensitivity to the keyword-set length N (see Figure D8 in Appendix).

The prediction performance of WCT

In this section, the relative frequency of the selected MKS and the daily new confirmed case counts were used to train prediction models and predict the case counts over the whole analysis period with D = 3 and g = 1. For each prediction result, the R values between the predictions and the real case counts over the whole analysis period, the training set, and the predicting set were calculated as performance indicators. Note that, unlike the three-fold cross validation used in the previous analysis, all the data in the training set were used to construct the models in this section. The MKSs with the highest R selected by GA and GCA were used to train the LR and LSTM models, where the MKS lengths for GCA&LR, GCA&LSTM, GA&LR, and GA&LSTM are N = 35, 37, 44, and 25, respectively (see Table 2). The prediction results show that WCT (GA&LR in Fig. 5b) has higher prediction accuracy than GFT (GCA&LR in Fig. 5b).
The performance of WCT over the whole analysis period is R = 0.97 (p < 0.01), the best among the contrast models, while the performance of GFT is R = 0.96 (p < 0.01). The performance of WCT on the training set (R = 0.98, p < 0.01) and the predicting set (R = 0.87, p < 0.01) is also the best among the four algorithms. Whereas GFT overestimated the daily new confirmed cases during the outbreak period (February 4-5, 2020) by 6%-8%, WCT breaks through this limitation and constrains the prediction error to fewer than 100 cases (0-3%) (Fig. 6a). The combination of GA and LR effectively overcomes GFT's shortcoming of over-estimating the epidemic peak value. Moreover, in both the training and testing processes, WCT consistently outperforms the other algorithms. In contrast, the LSTM model does not perform well in this task: in both GA&LSTM and GCA&LSTM, the peak number of cases was underestimated by up to 80%, and in the late stage of the epidemic the LSTM models overestimated the number of new cases by 10%-60% from March 1, 2020 to March 22, 2020.

Table 2. The keyword combination and performance of MKS selected by four algorithms.

Sensitivity analysis of WCT

In this section, the performance of the WCT algorithm under different parameter combinations was tested to evaluate the effect of the duration of the training data (D) and the lag for prediction (g). The parameter D changes from 1 to 7, meaning that the length of the training window increases from one day to a week before the days to be predicted. The parameter g changes from 1 to 15, meaning that the algorithm attempts to predict the number of daily new confirmed cases on the g-th day in the future. For each algorithm, the MKS length that produced the best performance in the three-fold cross validation is used (see Table 2). Fig. 7 shows the performance of the four algorithms.
All four algorithms are robust to the parameter D, especially when g is in the range 1-3. When the number of days of historical data used for prediction (D) increases from 1 to 7, the performance of the four algorithms remains rather stable, in contrast to the large variation of R with respect to the lag parameter g. Overall, there is a weak tendency toward better performance with larger D, i.e., the prediction model works better when more historical data are included in the training process. When g is small, for near-term predictions, the WCT model produces the best results given D in the range 2-5. For example, when the prediction horizon is extended from the next day (g = 1) to the second day (g = 2) with D = 3, the performance of WCT declines only from R = 0.97 to R = 0.96, while the R values of GFT are 0.96 and 0.93, respectively. When g increases from 10 to 12 with a week's historical data in training (D = 7), the R value of WCT ranges from 0.71 to 0.59, whereas GFT only achieves R values of 0.59 to 0.51. All four algorithms are sensitive to the parameter g: as the number of days predicted in advance increases, it becomes more difficult for the model to predict the future from existing data. Compared with the GCA-based algorithms (GFT and GCA&LSTM), the GA-based algorithms (WCT and GA&LSTM) are less sensitive to changes in g. For example, WCT still achieves R = 0.88 (D = 6) when g = 7, while the maximum R of GFT is only 0.78 (D = 7). Comparing the LR-based and LSTM-based predictions, the LSTM model is less sensitive to g and can maintain good performance as g increases. WCT continues to produce the best predictions among the algorithms as the forecast horizon increases from one to eight days, with the highest correlation score decreasing from 0.98 (p < 0.01) to 0.86 (p < 0.01).
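A sensitivity sweep of this kind, scoring a lagged least-squares model over the full grid of (D, g) combinations, can be sketched as follows (synthetic data; R is the Pearson correlation between held-out predictions and case counts, and the model is a simple stand-in for WCT):

```python
import numpy as np

rng = np.random.default_rng(3)
days = np.arange(160)
P = np.exp(-((days - 80) ** 2) / 500)       # keyword-frequency signal
C = 200 * P + rng.normal(0, 2, 160)         # daily new confirmed cases

def features(t, D, g):
    """Lagged windows of C and P ending at t - g, plus an intercept."""
    return np.r_[C[t-g-D+1:t-g+1][::-1], P[t-g-D+1:t-g+1][::-1], 1.0]

def sweep(split=100):
    """Held-out Pearson R for every (D, g) combination."""
    scores = {}
    for D in range(1, 8):          # training window: 1 day to a week
        for g in range(1, 16):     # forecast horizon: 1 to 15 days
            t0 = g + D - 1
            X = np.array([features(t, D, g) for t in range(t0, split)])
            y = C[t0:split]
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            preds = [features(t, D, g) @ coef for t in range(split, len(C))]
            scores[(D, g)] = float(np.corrcoef(preds, C[split:])[0, 1])
    return scores

scores = sweep()
best = max(scores, key=scores.get)          # (D, g) with the highest R
```

Plotting `scores` as a 7x15 heatmap reproduces the kind of comparison shown in Fig. 7, with one cell per parameter pair.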
However, when g increases to 15, the GA&LSTM model maintains a high R of 0.67 (D = 7), while WCT achieves R = 0.49 (D = 7). Several studies have applied social media datasets to predict new confirmed COVID-19 cases. Qin et al. (2020) used the Baidu search index to predict new confirmed case counts with a performance of R = 0.99 for g = 1. However, that model is of limited practical value, as it was not tested for longer-term predictions; WCT, by contrast, can predict case counts 1-8 days into the future with a high R of 0.86-0.98. Lampos et al. (2021) designed an unsupervised prediction model using Google Trends data, which can predict newly confirmed case counts with R = 0.83-0.85, more than 16 days ahead of official reports. However, that model relies on a manually constructed Google Trends keyword set, which is highly subjective, whereas WCT uses GA to select the MKS automatically and heuristically, with little human intervention. Ayyoubzadeh et al. (2020) also used Google Trends data to predict newly confirmed case counts in Iran; comparing a linear model with an LSTM model, they found that the linear model performed better, consistent with the conclusion of this study. The sensitivity analysis above shows that the WCT method is relatively robust to the parameters D and g: it produces the highest correlation scores for short-term predictions and maintains relatively stable performance for longer-term estimates.

Conclusion and discussion

In this study, an algorithm called WCT is proposed to predict new confirmed cases of COVID-19. Given the historical case counts and a comprehensive dataset of Sina Weibo posts by Wuhan users, WCT accurately predicts the number of daily new confirmed cases.
This paper applies a genetic algorithm to construct the keyword set automatically; the resulting set achieves a higher maximum average test score on the training set than that obtained by GCA (0.66 vs. 0.62). In terms of the Pearson correlation between the prediction results and the real case counts, the genetic algorithm selects keyword sets that are both more relevant and more stable than those of GCA. The relative frequency of the posts filtered by the selected keyword set is then fed to the LR algorithm, yielding estimates with a high correlation score of 0.97 (p < 0.01) over the whole analysis period, one day ahead of the official reports. WCT can accurately predict the development of COVID-19 using only the historical case counts combined with Weibo post data, and it overcomes GFT's shortcoming of over-estimating the epidemic peak value. However, since the development of public emergencies on social media is dynamic, one limitation of the WCT model is that the keyword set may need to be updated continuously as an emergency evolves, to ensure accurate prediction in the later stages of an epidemic or other public emergency; this makes the method challenging to apply. Moreover, compared with the classical GFT model, and considering the influence of noise and other factors, the short-term prediction accuracy of the WCT model still needs further improvement. This study offers a promising approach to using Sina Weibo data, or other social media data, for syndromic-surveillance-based disease prediction and for increasing global awareness of events. It provides a process for mining epidemic development trends from large-scale social media data without many manually set parameters. In the future, WCT can be extended to monitor and track other diseases or public emergencies by supplying the corresponding social media data.
Declaration of competing interest The authors declare that there are no conflicts of interest.
Modulation of Cellular MicroRNA by HIV-1 in Burkitt Lymphoma Cells—A Pathway to Promoting Oncogenesis

Viruses and viral components have been shown to manipulate the expression of host microRNAs (miRNAs) to their advantage, and in some cases to play essential roles in cancer pathogenesis. Burkitt lymphoma (BL), a highly aggressive B-cell derived cancer, is significantly over-represented among people infected with HIV. This study adds to accumulating evidence demonstrating that the virus plays a direct role in promoting oncogenesis. A custom miRNA PCR array was used to identify 32 miRNAs that were differentially expressed in Burkitt lymphoma cells exposed to HIV-1, with a majority of these being associated with oncogenic processes. Of those, hsa-miR-200c-3p, a miRNA that plays a crucial role in cancer cell migration, was found to be significantly downregulated in both the array and in single-tube validation assays. Using an in vitro transwell system we found that this downregulation correlated with significantly enhanced migration of BL cells exposed to HIV-1. Furthermore, the expression of the ZEB1 and ZEB2 transcription factors, which are promoters of tumour invasion and metastasis, and which are direct targets of hsa-miR-200c-3p, was found to be enhanced in these cells. This study therefore identifies novel miRNAs as role players in the development of HIV-associated BL, with one of these miRNAs, hsa-miR-200c-3p, being a candidate for further clinical studies as a potential biomarker for prognosis in patients with Burkitt lymphoma who are HIV positive.

Introduction

MicroRNAs (miRNAs) are a class of small noncoding RNA molecules that play an essential role in regulating gene expression post-transcriptionally [1,2]. In their mature form, miRNAs are approximately 18-22 nucleotides long and are incorporated into the RNA-induced silencing complex (RISC), where they act as guides, leading the complex to the 3'-UTR region of messenger RNA (mRNA).
Upon complementary binding to the target mRNA, the mRNA is either targeted for degradation or its translation is inhibited [3,4]. miRNAs are critical for normal cellular function, and it has emerged clearly in the last decade that aberrant miRNA expression is associated with many diseases, including malignancies. This is likely a result of amplification and/or deletion of specific genomic regions, with the deregulated miRNA having either an oncogenic or a tumour suppressor role, affecting one or several of the cancer hallmark events [5,6]. Non-Hodgkin lymphomas (NHLs) represent a heterogeneous group of malignancies originating from lymphoid haematopoietic tissue, the majority being of B-cell origin [7]. Quite early on in the acquired immune deficiency syndrome (AIDS) epidemic, an association between NHLs and AIDS was established, leading to the current classification of several NHL subtypes as being associated with HIV infection [8]. This includes Burkitt lymphoma (BL), a highly aggressive B-cell NHL characterized by the genetic hallmark of c-MYC rearrangement with the immunoglobulin gene loci, most commonly t(8;14)(q24;q32).

Cell Culture

The Burkitt lymphoma cell lines Ramos and BL41 were cultured in RPMI 1640 medium (Sigma-Aldrich, Saint Louis, MO, USA) supplemented with 10% FBS and 1% penicillin/streptomycin (P/S). The cells were maintained at 37 °C in a humidified incubator supplemented with 5% CO2. Cells were exposed extracellularly to aldrithiol-2-inactivated HIV-1 virions (HIV-1 AT-2) at 500 ng/mL, or a matched microvesicle (MV) control, for 3 h (virions and MV controls were a kind donation of Professor Jeff Lifson, AIDS and Cancer Virus Program, Frederick National Laboratory, USA) [24].
miRNA Isolation and PCR Array

miRNA was isolated using the mirVana miRNA Isolation kit (Thermo Fisher Scientific, Waltham, MA, USA) and quantified using the NanoDrop ND-1000 Spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA) and the Qubit RNA Assay kit (Invitrogen, Waltham, MA, USA), as per the manufacturer's protocol. Nucleotide integrity was analysed using gel electrophoresis. miRNA profiling was performed using an Applied Biosystems custom 192a TaqMan quantitative real-time PCR low density array (TLDA) card (#4346802). The 192a card format was used, and each array card contained mature sequences of 188 miRNAs and three controls (RNU6B, RNU48, RNU44) pre-spotted in duplicate on a 384-well plate array. The RNA isolated from HIV-1 AT-2 and microvesicle treated cells (1 µg per sample) was converted to cDNA using a custom pool of multiplex stem-loop primers and the TaqMan miRNA Reverse Transcription kit (Thermo Fisher Scientific, Waltham, MA, USA), according to the manufacturer's instructions. The cDNA samples were loaded onto the custom TLDA card and target amplification was performed using the TaqMan Universal PCR Master Mix kit (No AmpErase UNG, 2X) (Thermo Fisher Scientific, Waltham, MA, USA) with specific primers and probes on the TaqMan miRNA microarray. The PCR array and data analysis were performed at the Centre for Proteomic and Genomic Research (CPGR, Cape Town, South Africa) using their Applied Biosystems 7900HT Real-Time PCR system (Applied Biosystems, Carlsbad, CA, USA).

miRNA Target and Pathway Analyses

The bioinformatic predictive tools TargetScan [25], DIANA TarBase [26] and miRDB [27,28] were used to identify gene targets. A list of the top 20 predicted gene targets from each bioinformatic tool was compiled for each miRNA.
The Venn diagram creation tool InteractiVenn [29] was used to generate Venn diagrams for the two sets of differentially expressed miRNAs and to identify common gene targets. To identify relevant biological processes and pathways downstream of the miRNA gene targets, the Database for Annotation, Visualisation and Integrated Discovery (DAVID) bioinformatic tool was used [30]. Annotation of enriched biological processes and KEGG pathways downstream of target genes was restricted to those with p values of ≤0.05.

RNA Isolation and miRNA Single-Tube TaqMan qPCR Assays

To validate differentially expressed miR-200c-3p, single-tube TaqMan miRNA assays for hsa-miR-200c-3p and endogenous controls (RNU48, RNU6B) (Applied Biosystems, Carlsbad, CA, USA) were performed. Total RNA was isolated from treated cells and reverse transcription performed (10 ng per sample) using specific stem-loop primers and the TaqMan miRNA Reverse Transcription kit (Applied Biosystems, Carlsbad, CA, USA). This was followed by qPCR using specific primer pairs (hsa-miR-200c-3p and controls) and the TaqMan Universal PCR Master Mix kit (No AmpErase UNG, 2X) (Applied Biosystems, Carlsbad, CA, USA). The qPCR and analysis were performed on the Rotor-Gene Q 2356 (Qiagen, Hilden, Germany). The delta Ct method was used to analyse the expression of the genes of interest relative to the internal control in each sample. Comparison was made between the HIV-treated and control MV-treated cells using the fold change (2^-ΔΔCt), where the control group was set to 1.

cDNA Synthesis and qPCR for ZEB1 and ZEB2

Reverse transcription was performed using the iScript cDNA Synthesis Kit (Bio-Rad, Hercules, CA, USA) according to the manufacturer's recommendations, and the cDNA was used as a template for quantitative PCR using the KAPA SYBR FAST qPCR Kit (Kapa Biosystems, Western Cape, South Africa).
The primer sets used for amplification were: GAPDH, forward 5'-GAAGGCTGGGGCTCATTT-3', reverse 5'-CAGGAGGCATTGCTGATGAT-3'; ZEB1, forward 5'-GCCTGAAATCCTCTCTGAATG-3', reverse 5'-CACCTCTTGTCAAAC-3'; ZEB2, forward 5'-GAAGAGACTGGAGATCACTC-3', reverse 5'-GCCATCTTCCATATTGTC-3'. The expression of GAPDH was used as the internal control. Comparison was made between the HIV-treated and control MV-treated cells using the fold change (2^-ΔΔCt), where the control group was set to 1.

Transwell Migration Assay

Cell migration was measured using a 2-chamber migration assay system (Transwell migration assays, Corning, NY, USA). Briefly, medium supplemented with 10% FBS was added to the bottom chamber and the Transwell chambers (8 µm pore size) were placed on top, into which cells were seeded in low-serum medium (0.5% FBS). Migration was allowed to proceed for 24 h. The cells on the upper side of the chamber were carefully removed. The migrated cells on the bottom of the membrane were fixed using 100% methanol, stained using 0.2% crystal violet, air-dried, and thereafter solubilized in 50% acetic acid. The absorbance was read at 595 nm.

Statistical Analyses

For the miRNA PCR array data, the SDS output file (the output format of the Applied Biosystems ABI7900HT qRT-PCR instrument) was converted to plain text using Applied Biosystems' RQ Manager (version 1.2). Bioconductor's HTqPCR package (Dvinge & Bertone, 2009) was used in R (R Development Core Team, 2013) to analyse the qRT-PCR data [31]. Each amplification plot was viewed using RQ Manager (Applied Biosystems, Carlsbad, CA, USA), whereby the baseline and threshold values were set manually; failed replicates were excluded and only probes with two or more replicates were retained. The data were then exported into the DataAssist software (version 3.01) (Applied Biosystems, Carlsbad, CA, USA) to generate the Ct values for each replicate. Ct values between 30 and 37 were retained and the median value was calculated.
The data were normalised using the geometric mean method [32]. The delta Ct method (2^-ΔΔCt) was used to determine the fold change in miRNA expression, and miRNAs that exhibited a fold change of two or more (FDR-adjusted p-value ≤ 0.06) were selected for further analysis. Student's t-test (two-tailed) was used to test for significance between the HIV- and MV-treated samples. For miRNA validation and all other qPCR data, the Rotor-Gene Q software was used to analyse and determine the Ct values. Student's t-test (two-tailed) was used for comparison of the normalised data between the HIV-treated and MV-treated groups. All normally distributed data are presented as means ± SEM and significance was determined using the two-sample t-test (Microsoft Excel for Office 365 or GraphPad Prism version 8). The latter was applied to the single-tube miRNA qPCR assays, the qPCR assays for ZEB1 and ZEB2, and the migration assays.

Exposure to HIV-1 Leads to Significant Changes in the miRNA Profile of Burkitt Lymphoma Cells

We designed a custom microarray based on the most common miRNAs reported to be deregulated in diffuse large B-cell lymphoma and Burkitt lymphoma (Supplementary Tables S1-S3). Among people living with HIV, these cancers represent the two most prevalent non-Hodgkin lymphomas (NHLs) within this population group. However, the deregulation of these miRNAs within the context of HIV remains unknown. We thus performed a differential screening, using this custom-designed array, to assess changes in the expression landscape of these miRNAs within an HIV-positive context. Cells derived from the Burkitt lymphoma cell line Ramos were exposed to HIV-1 AT-2, and the miRNAome was assessed and compared to that of control cells (exposed to matched microvesicles).
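The 2^-ΔΔCt fold-change calculation used throughout the qPCR analyses above can be sketched as follows (all Ct values are hypothetical, chosen only to illustrate the arithmetic):

```python
# Relative expression by the delta-delta-Ct (Livak) method.
# All Ct values below are hypothetical, for illustration only.

def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """2^-ddCt: treated vs control, each normalised to a reference gene."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Example: the target miRNA's Ct rises by one cycle in treated cells
# relative to the reference control, i.e. ~2-fold downregulation.
fc = fold_change(26.0, 20.0, 25.0, 20.0)   # -> 0.5
```

With the control group set to 1, a value of 0.5 corresponds to the 2-fold downregulation reported for miR-200c-3p.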
HIV-1 virions treated with Aldrithiol-2 (AT-2), a mild oxidising agent, lose the ability to infect cells as a result of oxidation of the cysteine residues of the nucleocapsid proteins [24], but the structural integrity of the glycoproteins on the surface of the virions remains unaffected, ensuring that they retain the ability to interact with cell surface receptors. Following exposure for 3 h, thirty-two (32) miRNAs were found to be differentially expressed, by 2-fold or more (with a threshold of significance of p ≤ 0.06), in HIV-exposed cells when compared to control cells (Figure 1a). This indicates that the pathobiology of HIV-associated NHLs provides a cellular microenvironment that alters miRNA pathways in a way that is distinct from one in which HIV is not present. We next sought to better define the role of these miRNAs in order to understand how they contribute to the HIV-associated NHL cancer phenotype. Using three independent predictive bioinformatics tools (TargetScan [25], DIANA TarBase [26] and miRDB [27,28]), the top 20 genes potentially targeted by each of the 32 miRNAs were identified and analysed, revealing 13 and 23 genes commonly targeted across all three databases for the upregulated and downregulated miRNAs, respectively (Figure 1b and Table 1). The analysis of biological processes and pathways associated with these 36 genes reveals associations with a variety of cellular processes and pathways, notably with B-cell differentiation, cell cycling, proliferation, DNA damage and drug responses, and several typical KEGG cancer pathways (Figure 2). For instance, miR-222-3p, which we found to be upregulated by ~7-fold in our array, has previously been shown to be downregulated in BL relative to DLBCL [33], and specifically targets the cyclin-dependent kinase inhibitor p27Kip1 [34].
Within an HIV-positive environment, an upregulation of this miRNA would therefore translate to enhanced cellular proliferation, a pertinent feature of these aggressive HIV-associated cancers. Other miRNAs with known roles in cancer-promoting processes identified in the array include miR-575, miR-363-3p and several others. Of particular interest was hsa-miR-200c-3p, which, in addition to having a strong association with numerous cancer types, has previously been reported to be downregulated in DLBCL [35].

Table 1. List of commonly predicted gene targets of upregulated and downregulated miRNAs in BL cells exposed to HIV-1.

Hsa-miR-200c-3p Is Significantly Downregulated in HIV-1-Treated Burkitt Lymphoma Cells, and This Is Associated with Enhanced Migration

We found the expression of hsa-miR-200c-3p to be downregulated by greater than 6.67-fold in the array. The validity of this observation was strengthened when single-tube miRNA assays showed that miR-200c-3p was indeed significantly downregulated in Ramos cells (2-fold), as well as in a second BL cell line, BL41 (2-fold), when these cells were exposed to HIV-1, relative to controls (Figure 3a,b). The miRNA-200 family, which is highly conserved among vertebrates, has been shown to play a key role in cancer, from cancer initiation to metastasis [22].
Not only has it been shown to be downregulated in B-cell lymphomas, but also in cancers of the breast, lung, oesophagus, stomach, colon and many others. Importantly, the use of this miRNA as a prognostic marker looks promising, showing a favourable positive predictive value when evaluated in the plasma of cancer patients. Although reported to be involved in a variety of cancer types and cellular processes (Table 2), the miRNA-200 family is particularly associated with inhibition of the epithelial-to-mesenchymal transition, an early step in metastasis, maintaining the epithelial phenotype by directly targeting transcriptional repressors [36]. In lymphoma, the role of miRNA-200c remains unclear, with reports of both upregulation and downregulation of this microRNA [37,38]. In order to ascertain whether this downregulation was physiologically relevant, the migratory ability of BL cells exposed to HIV-1 was investigated. A validated in vitro assay was used, consisting of a two-chamber system separated by a porous membrane, with a differential chemoattractant (FBS) concentration between the two chambers. The extent of cellular migration was determined 24 h post-treatment. Indeed, both Ramos and BL41 cells displayed significantly enhanced migratory abilities in the presence of HIV-1, compared to control cells. The migration rates increased by 32% and 37% for Ramos and BL41 cells, respectively (Figure 3c,d).

Figure 3. Fold change in miR-200c-3p expression in (a) Ramos cells and (b) BL41 cells exposed to HIV-1 AT-2 compared to control microvesicle-exposed cells. The cells were treated with either HIV-1 AT-2 or microvesicles and thereafter RNA was isolated. TaqMan® single-tube miRNA assays were used for RT-qPCR and the ΔΔCt (2^−ΔΔCt) method was used for quantification. (c) Fold change in migration in Ramos cells and (d) in BL41 cells exposed to HIV-1 AT-2 compared to control microvesicle-exposed cells. Cells were treated as described above and Transwell® migration assays were used to measure migratory ability. The treated cells were plated in low-serum medium (0.5% FBS) in the top chamber and allowed to migrate towards the nutrient-rich (10% FBS) medium below. Migrated cells were stained and absorbance readings (correlating with the number of cells) were taken. The data were normalised to the total number of plated cells. Student's t-test was performed to determine statistical significance (* p < 0.05, ** p < 0.01, *** p < 0.001); error bars represent standard deviation.

MiR-200c-3p Downregulation and Enhanced Migration Correlate with Over-Expression of ZEB1 and ZEB2 Proteins in BL Cells

The Zinc Finger E-box Binding (ZEB) family of transcription factors has been experimentally confirmed in numerous studies to be targeted by miR-200c [39,49]. These proteins have been described as master regulators of the epithelial-to-mesenchymal transition (EMT), through their ability to regulate genes involved in cell plasticity, intercellular adhesion and degradation of the extracellular matrix [50]. ZEB1 was among the top-20 targets common to all three databases found to be potentially upregulated in our array (Table 1). At the mRNA level, we found ZEB1 to be significantly downregulated in both Ramos (1.75-fold) and BL41 cells (1.30-fold) when exposed to HIV-1 (Figure 4a,b).
Conversely, at the protein level, there was an increase in ZEB1 protein expression in both the Ramos (2.75-fold) and BL41 (1.33-fold) cells (Figure 4c,d). A very similar pattern was observed when the expression of ZEB2 was investigated. The expression of the ZEB2 mRNA was significantly reduced in both cell lines upon exposure to HIV-1 (Figure 5a,b), with a decrease of 2.04-fold in Ramos cells and of 1.30-fold in BL41 cells. As for ZEB1, the expression of the ZEB2 protein was enhanced in both the Ramos cells (1.99-fold) and the BL41 cells (2.78-fold) upon exposure to HIV-1 (Figure 5c,d).

Figure 4. HIV-1 AT-2 deregulates expression of ZEB1 in BL cells. The cells were treated with either HIV-1 AT-2 or microvesicles and thereafter RNA (a,b) or protein (c,d) was isolated. mRNA expression of ZEB1 in Ramos (a) and BL41 (b) cells treated with HIV-1 AT-2, as determined by RT-qPCR. Protein expression of ZEB1 in Ramos (c) and BL41 (d) cells, as determined by Western blotting, using p38 as loading control. For (a,b), the ΔΔCt (2^−ΔΔCt) method was used and Student's t-test was performed to determine statistical significance (* p < 0.05, *** p < 0.001); error bars represent standard deviation.

Figure 5 (caption fragment): For (a,b), the ΔΔCt (2^−ΔΔCt) method was used for quantification and Student's t-test was performed to determine statistical significance (** p < 0.01, *** p < 0.001); error bars represent standard deviation.

Discussion

People living with HIV are at increased risk of developing cancer, with non-Hodgkin lymphoma being one of the most prevalent cancers within this group [8]. While traditionally this enhanced risk was attributed to HIV-1-induced immune suppression and exhaustion, as well as chronic B-cell activation, the advent of antiretroviral therapy (ART), even at early stages of infection, did not abolish this risk [51]. There is now ample experimental evidence to support an oncogenic role for HIV-1 and its antigens in carcinogenesis [52].
HIV-1 does not infect B lymphocytes; however, the virus is capable of binding these cells through cell surface receptors [20], as can components of the virus, as demonstrated by the binding of the p17 matrix protein to CXCR chemokine receptors [53]. Whether through cell surface signalling or via internalization, HIV-1 has the ability to alter cellular processes at multiple levels. In the current study, using a custom array based on miRNAs frequently reported to be altered in the two most prevalent HIV-associated NHLs, we identified 32 miRNAs, out of the 188 selected, that were differentially expressed in BL cells exposed to HIV-1, relative to controls. To the best of our knowledge, this is the first study to report on differentially expressed miRNAs in Burkitt lymphoma cells exposed to HIV-1. The relationship between HIV-1 and cellular miRNAs is well described. The virus and its components have previously been reported to alter the expression of cellular miRNAs in other cellular contexts. In a recent study, the T-cell lymphoblastic lymphoma SupT1 cell line showed alteration of several cellular miRNAs upon infection with HIV [54]. In fact, the alteration of host miRNA networks in CD4+ cells seems to be crucial for successful viral invasion and latency. In the context of HIV non-host cells, there are a few reports of alterations in miRNAs due to the presence of the virus or its antigens. For instance, in Kaposi's sarcoma (KS), the HIV-1 Tat protein has been shown to synergize with the KSHV oncogene Orf-K1 to induce miR-891a-5p, modulating NF-κB [55]. There has not, as yet, been any comprehensive medium- or large-scale study of differential miRNA expression in NHLs comparing HIV-positive with HIV-negative disease.
An earlier study, conducted on a cohort in Kenya, measured the expression of only a selected number of miRNAs, linked to the regulation of DNA methyltransferase (DNMT), in HIV-related NHLs (formalin-fixed, paraffin-embedded tumours) and compared it to expression in HIV-negative controls [56]. MiRNA in silico prediction analyses allowed for the identification of 36 putative targets (13 potentially downregulated, and 23 potentially upregulated). The miRNA interactome is complex, with single miRNAs shown to be able to target dozens of genes, and this hinders straightforward interpretation of differences in miRNA expression. Importantly, a majority of the predicted targets identified from our miRNA array, and their associated biological processes, were found to be associated with cancer hallmarks with a high degree of confidence. Nevertheless, experimental validation is essential to assess the true physiological impact, and thus, using single-tube miRNA assays, hsa-miR-200c-3p was confirmed as significantly downregulated in BL cells exposed to HIV-1. This downregulation was strongly associated with enhanced cellular migration, a physiological function linked to this miRNA [36]. The role and significance of miR-200c-3p in BL development has not been clearly defined. In many cancers, such as breast, ovarian and endometrial cancers, miR-200c-3p has been identified as having a tumour suppressor role [41,48,57]. The miR-200 family cluster of miRNAs has been termed "the guardians of the epithelial phenotype", as its members are enriched in epithelial tissues and epigenetically silenced in mesenchymal tissues [42,58]. Along with other miRNAs, members of the miR-200 family have been shown to be markers of the epithelial phenotype, since miR-200c-3p targets numerous mesenchymal genes and inhibits tumour cell migration and invasion [59].
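The consensus-target step described earlier (top-ranked predictions from TargetScan, DIANA TarBase and miRDB, intersected per miRNA) can be sketched as follows. The gene lists below are placeholders, not the study's actual predictions:

```python
# Sketch of the consensus-target selection: for one miRNA, intersect the
# top-ranked target predictions from the three tools and keep only genes
# common to all of them. Gene lists here are illustrative placeholders.
def consensus_targets(predictions_by_tool):
    """predictions_by_tool: dict mapping tool name -> list of top predicted targets."""
    tool_sets = [set(genes) for genes in predictions_by_tool.values()]
    return sorted(set.intersection(*tool_sets))

predictions = {
    "TargetScan": ["ZEB1", "CDKN1B", "KLF9", "BAP1"],
    "TarBase":    ["ZEB1", "CDKN1B", "SOX2"],
    "miRDB":      ["CDKN1B", "ZEB1", "NOTCH1"],
}
print(consensus_targets(predictions))  # ['CDKN1B', 'ZEB1']
```

Running this per miRNA and pooling the surviving genes is one way the 13 + 23 consensus targets of Table 1 could be assembled.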
ZEB1 and ZEB2 are two mesenchymal genes that are direct targets of miR-200c-3p [36,39,48,60], and we thus sought a correlation between them within our model system. We found the mRNA expression of both ZEB1 and ZEB2 to be significantly downregulated in response to HIV-1 exposure in both cell lines (Figure 4a,b and Figure 5a,b). Although an inverse correlation would be expected, assessment of changes in the transcription level of miRNA targets can be misleading: a miRNA may bind its target mRNA and prevent translation with no change in mRNA levels, as has been demonstrated in several studies [61]. We speculate that the downregulation of ZEB transcription observed upon HIV-1 exposure in our study results from a different mechanism and could represent an attempt by the cellular machinery to mitigate the deleterious effects of this highly oncogenic factor. ZEB is known to be regulated at multiple levels through a complex web of intracellular signalling pathways, and it would be difficult to pinpoint the exact mechanism without a global view of the regulatory landscape within B cells in an HIV-positive background; this supports the need for more research in this field [62]. It is, however, important to note that ZEB transcription is not abrogated upon exposure to HIV-1, and sufficient mRNA is still produced to allow enhanced protein expression. Interestingly, a double-negative feedback loop exists between the ZEB transcription factors and the miR-200 family, whereby these proteins can directly bind to and inhibit the expression of the genes encoding the miR-200 family [63,64]. This feedback loop is advantageous to cells, as it allows for easy and reversible switching between epithelial and mesenchymal characteristics, depending on extracellular signals [65].
Contrary to the mRNA, the expression of the ZEB1 and ZEB2 proteins was upregulated in Burkitt lymphoma cells upon exposure to HIV-1, supporting the notion that downregulation of hsa-miR-200c-3p alleviates active repression of the ZEB proteins and thereby enhances the migration of BL cells. Although still poorly understood, the mechanisms driving malignant cell migration are highly complex and involve several coordinated events, including the development of cytoplasmic protrusions, changes in cellular adhesion and traction, expression of proteolytic enzymes and more [66]. The ZEB proteins are part of a group of transcription factors (including Snail, Slug, KLF8 and others) that regulate this process, and while they have been shown to be specifically involved in cell plasticity, for instance through their ability to repress the adhesion molecule E-cadherin, they contribute to several other cellular events that promote cancer, such as enhanced cell cycling and the acquisition of drug resistance [64]. Similarly, although the role of hsa-miR-200c-3p in cancer cell migration is well described, several other miRNAs have been identified as playing pertinent roles in this specific function [67,68]. It is therefore clear that an array of factors and mechanisms is needed to drive cancer cell migration, and that in this particular study, while the hsa-miR-200c-3p/ZEB axis has been identified as a role player, it is as yet an association that needs to be confirmed via further loss- and gain-of-function analyses. This study therefore contributes to accumulating evidence that HIV-1 can directly promote oncogenic pathways in B-cell lymphoma, and clinical studies should be conducted to evaluate the use of hsa-miR-200c-3p as a potential novel prognostic biomarker in HIV-positive patients with Burkitt lymphoma.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/genes12091302/s1. Table S1: Summary of miRNAs reported to be deregulated in diffuse large B-cell lymphoma; Table S2: Summary of miRNAs reported to be deregulated in Burkitt lymphoma; Table S3: MiRNAs (n = 188; 192a format) for microarray profiling, spotted in duplicate, including four controls.
Variability of the Bacillus anthracis tryptophan operon

Abstract

Background: Bacillus anthracis is the causal agent of a zoonotic disease relevant to many countries and is an agent of bioterrorism. Meanwhile, the reasons for the tryptophan dependence of some strains with altered virulence have not been established, and information on the tryptophan operon of this pathogen is almost completely lacking. In this study, we report the gene variability and structure of the tryptophan operon in B. anthracis strains of the three main lineages.

Results: For in silico analysis we used 112 B. anthracis genomes, including 68 available in the GenBank database and 44 sequenced at our institute. The B. anthracis tryptophan operon has an ancestral structure with a complete set of seven partially overlapping genes. The results show that the variability of all seven tryptophan operon genes is determined by the presence of single nucleotide polymorphisms and InDels. The trpA genes of strains of the main lineage B and the trpG genes of strains of lineage C are pseudogenes, and the proteomes lack the corresponding enzymes of the biosynthetic pathway, which may explain the tryptophan dependence of lineage B strains.

Conclusion: In this study, differences in the tryptophan operon genes of B. anthracis strains belonging to different main lineages were demonstrated for the first time. A mutation in the gene of the tryptophan synthase subunit alpha can explain the dysfunction of this enzyme and the tryptophan dependence of strains of the main lineage B. The identified features warrant further study of tryptophan dependence in B. anthracis strains of the main lineage B and may be of interest from the point of view of the intraspecific evolution of the anthrax pathogen.

Background

The causal agent of anthrax, Bacillus anthracis, causes a particularly dangerous zoonotic infection with a global range and is a group A agent of biological terrorism [1].
The ability of the spore form of this bacterium to persist in soil foci for decades and to cause poorly predictable disease outbreaks among livestock, often accompanied by human infections, makes anthrax a problem for public health and veterinary medicine in many countries, including Russia [2]. Anthrax infection and its causal agent have been the subject of numerous studies worldwide, but despite the long history of research on B. anthracis, some properties of this pathogen remain poorly understood. Among them is the tryptophan dependence of a number of strains whose virulence is reduced [3]. The tryptophan biosynthesis pathway is one of the branches of the general branched aromatic amino acid biosynthesis pathway, which starts with chorismic acid. The tryptophan (trp) operon is responsible for tryptophan biosynthesis. The genes and operons of the tryptophan biosynthetic pathway are organized differently in different types of bacteria. These differences reflect evolutionary divergence, as well as adaptation to unique metabolic capabilities and interactions with the environment [4]. Jacques Monod first described the tryptophan operon, in Escherichia coli, in 1953. The Bacillus anthracis tryptophan operon contains genes for seven catalytic domains encoding five enzymes, including two α/β subunit complexes, tryptophan synthase and anthranilate synthase. This ancestral structure, a complete gene set organized as a single whole-pathway operon, is widespread among prokaryotes. In some organisms the genes of the biosynthetic pathway are scattered; in others they are organized in two or more "split-pathway" operons. The question is what evolutionary relationships exist between these three types of pathway gene organization. The trp operon is an excellent model for studying biosynthetic pathways [5].
The mechanism of tryptophan dependence is not entirely clear, and there is very little published information on the trp operon of B. anthracis. One possible reason for trp dependence may be mutations in the genes encoding enzymes of the trp synthesis pathway. There is an important link between the organization and genomic context of the trp operon genes and the mechanism that regulates their expression. The regulatory mechanisms used to control the transcription of tryptophan biosynthesis genes in B. anthracis are still poorly understood. It is known that, unlike Bacillus subtilis, B. anthracis lacks the trp RNA-binding attenuation protein (TRAP), encoded by the mtrB gene. The limited knowledge of the trp operon and of the mechanism by which tryptophan auxotrophy develops in B. anthracis determines the relevance of this study. Our aim was to analyze the features of the genes and structure of the trp operon of different B. anthracis strains.

Results

The comparison of the nucleotide sequences of the trp operon genes showed the following: the trpA gene of the 33 strains of the main genetic lineage B is 651 bp long; in strain I-373 it is 650 bp, and in strain Tyrol 4675, 777 bp. In addition to the trpG gene, encoding aminodeoxychorismate/anthranilate synthase component II within the tryptophan operon, the B. anthracis genomes of all three main lineages contain the pabA gene, encoding the same enzyme component. This protein is also synthesized by many strains of Bacillus cereus, Bacillus thuringiensis and other bacilli. The trpG and pabA genes are distinguished by multiple substitutions and InDels, as are the proteins they encode.

Discussion

Reconstruction of the structure of the tryptophan operon of different B. anthracis strains showed that there are differences between its structure in strains of the main lineages (Fig 3).
In strains of lineage B, a mutation in the trpA gene, which turns it into a pseudogene, should block the last step of the tryptophan biosynthesis pathway, since the tryptophan synthase subunit alpha is absent. This circumstance may explain the tryptophan dependence of strains of the main lineage B. The high-resolution reference phylogeny, based on 11989 SNPs from the genomes of 193 strains of the global collection, reveals that the next event after the separation of lineage C from A/B was the divergence of lineage C into subclusters, followed by the separation of lineages A and B [12]. Clade A divides into four main monophyletic subclades, of which the "Ancient A" clade formed earlier than the others and is the base for the other subclades of this lineage. The basal subclade of clade B may be subclade B.Br.003, which includes subclade B.Br.004 with strains from Europe, formed at about the same time as subclade A.Br.002; other subclades of lineage B include strain HYU01 from South Korea (subclade B.Br.002), which appeared later, and finally the strains of clade B.Br.008 isolated in South Africa and Sweden [12]. According to our data, subclade B.Br.002 contains, along with the isolate from Korea, strains isolated in Western Siberia (a separate cluster, "Siberia") and Finland [13], although in an earlier work the strain from Finland was described as constituting a separate clade of lineage B.Br.002, with the nearest strains being HYU01 from South Korea and BF1 from Germany [14]. Based on these data, it can be assumed that clade C, the oldest, with a minimum number of isolates, has become a blind branch of evolution that has not spread outside the United States.
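A pseudogene call of the kind discussed above (a frameshifting InDel or a premature stop codon in trpA or trpG) can be illustrated with a toy check. The sequences below are invented for illustration, not real trpA alleles:

```python
# Toy sketch of an InDel/premature-stop pseudogene check: a CDS whose length
# is not a multiple of 3 (frameshifting InDel), or that contains an internal
# stop codon, cannot yield the full-length enzyme.
STOPS = {"TAA", "TAG", "TGA"}

def looks_like_pseudogene(cds):
    cds = cds.upper()
    if len(cds) % 3 != 0:            # InDel not a multiple of 3 -> frameshift
        return True
    codons = [cds[i:i + 3] for i in range(0, len(cds), 3)]
    return any(c in STOPS for c in codons[:-1])   # internal (premature) stop

intact   = "ATGGCTGAAAAATAA"     # M-A-E-K, then the normal terminal stop
deletion = "ATGGCTGAAAAAA"       # 13 bp: a 2-bp loss shifts the reading frame
early    = "ATGTAAGCTGAAAAATAA"  # premature TAA right after the start codon

print(looks_like_pseudogene(intact),   # False
      looks_like_pseudogene(deletion), # True
      looks_like_pseudogene(early))    # True
```

In practice such calls would be made against annotated alignments to the Ames Ancestor reference rather than raw length checks, but the logic of the decision is the same.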
Clades A and B, evolving independently, spread to varying degrees in different geographical areas, and there are local regions where strains of clades A and B coexist, for example Kruger Park in South Africa and probably certain regions of the Russian Federation (Republic of Dagestan, Western Siberia). The fact that only five isolates of B. anthracis lineage C have been found, all in North America, together with the limited number of lineage B strains and the wide distribution of lineage A strains, suggests ecological advantages of the latter, which may also be associated with differences in the functioning of the tryptophan, and possibly other, operons. In strains of Bacillus cereus, the structure of the tryptophan operon does not differ from that in B. anthracis, but the genes and corresponding proteins are mainly specific to this species, although some proteins are identical between these two species and Bacillus thuringiensis.

Conclusion

The general structure of the B. anthracis trp operon is conserved and is characterized by the presence of seven partially overlapping genes. We have shown differences in the gene sequences and proteins of the biosynthetic pathway between the main lineages of the anthrax pathogen. In accordance with the nature of the single nucleotide polymorphisms and InDels in the trpA and trpD genes, the studied strains fall into two groups, one comprising strains of the main lineages A and C, and the other, strains of lineage B. Owing to a mutation that turns the trpA gene of the tryptophan synthase subunit alpha into a pseudogene, the last step of the tryptophan biosynthesis pathway should be blocked, which may explain the tryptophan dependence found in several B. anthracis strains of the main lineage B. It remains unknown whether tryptophan dependence is inherent in all strains of this lineage.
The presence of the trpG pseudogene in strains of the main lineage C, and the resulting inability to synthesize anthranilate synthase component II, can probably be compensated for by the glutamine amidotransferase activity of the functional pabA gene outside the tryptophan operon. The worldwide distribution of lineage A strains suggests ecological advantages, which may be associated in particular with a fully functioning tryptophan operon. The revealed features warrant further study of tryptophan dependence in B. anthracis strains of the main lineage B and may be of interest from the point of view of the intraspecific evolution of the anthrax causal agent.

Bacterial strains

In our study we used 112 genomes of B. anthracis strains: 44 of them, sequenced in this study, are from the State Collection of Pathogenic Microorganisms of the Stavropol Research Anti-Plague Institute (Table 1), and 68 genomic sequences of B. anthracis strains are from GenBank (Additional file 1: Table S1).

Growth of B. anthracis and extraction of DNA

B. anthracis strains were cultivated on blood agar and then inactivated, and DNA was extracted using the QIAamp DNA Mini Kit (Qiagen, Germany) according to the manufacturer's protocol and the requirements of the biological safety rules for working with pathogens of the third group of pathogenicity. DNA concentration was quantified using the dsDNA HS Qubit assay kit (Thermo Fisher Scientific, USA) according to the manufacturer's protocol. DNA preparations were stored at −20 °C until further use.

Whole genome sequencing

Genomic libraries with a 400 bp read length were prepared using the Ion Xpress Plus Fragment Library Kit (Life Technologies, USA) in accordance with the manufacturer's protocol. Monoclonal amplification on microspheres was performed using Ion PGM Hi-Q View OT2 Kit reagents (Life Technologies, USA).
Genome sequencing was performed using an Ion Torrent PGM sequencer and Ion 316 Chip Kit V2 chips (Life Technologies, USA).

Bioinformatics analysis

We searched for mutations in the genomes in silico using the CLC Sequence Viewer 6 [15] and MEGA v.10.0.5 [16] programs, with the genome of the Bacillus anthracis Ames Ancestor strain as the reference and data on the genes and enzymes of the trp operon from GenBank. Phylogenetic analysis was performed by the Maximum Likelihood method (1000 bootstrap replicates) in MEGA v.10
Pomegranate (Punica granatum) Peel Extract as a Green Corrosion Inhibitor for Mild Steel in Hydrochloric Acid Solution

The inhibition effect of pomegranate peel extract (PPE) on the corrosion of mild steel in hydrochloric acid (HCl) solution was investigated. The polarization, mass loss, and electrochemical impedance techniques were used to evaluate the corrosion inhibition performance of the pomegranate peel extract. The results revealed that PPE acts as a corrosion inhibitor in HCl solution. The inhibition efficiency increased with increasing extract concentration. The inhibition action was attributed to the adsorption of the chemical compounds present in the extract solution on the mild steel surface.

Introduction

Corrosion inhibitors are widely used in industry to reduce the corrosion rate of metals and alloys in contact with aggressive environments. Most corrosion inhibitors are synthetic chemicals that are expensive and very hazardous to the environment. Therefore, cheap and environmentally safe inhibitors are needed [1-3]. There are some reports on the inhibition effects of nontoxic compounds on the corrosion of metals [4-9]. The inhibition effects of amino acids on steel [1] and aluminium [10] corrosion, and of Prunus cerasus juice [11] on steel corrosion in acidic media, have been reported. On the other hand, rare earth metals have been proposed as corrosion inhibitors [12-15]. The inhibition effects of some nontoxic organic compounds have also been reported for steel corrosion [16,17], but they are quite expensive.

Pomegranate (Punica granatum L.)
is one of the important fruits grown in Turkey, Iran, the USA, the Middle East, and Mediterranean and Arab countries. It originated in southeast Asia [18]. The edible part of the fruit contains considerable amounts of acids, sugars, vitamins, polysaccharides, polyphenols, and important minerals [19,20]. It has been reported that pomegranate juice has potent antiatherogenic effects on human health and antiatherosclerotic effects in mice, which may be attributable to the juice's antioxidative properties [21]. The peel of the pomegranate contains hydroxyl, carbonyl, and aromatic groups, with considerable amounts of punicalagin [22], punicalin [23], granatin A [24], granatin B [25], maleic acid [26], ursolic acid [27], gallic acid [28], and antioxidant materials [29]. These substances, with effective constituent chemical groups in their structures, could show corrosion inhibition performance.

The aim of this study is to investigate the inhibition effect of PPE as a cheap, raw, and nontoxic inhibitor for steel corrosion in hydrochloric acid. Electrochemical measurements and mass loss methods were employed to evaluate the inhibition efficiency of pomegranate peel extract (PPE).

Materials and Alloy Samples

The metal substrate used in this work was mild steel, with the chemical composition shown in Table 1. The pomegranate peel powder was purchased from a local market in Tabriz, Iran, and HCl 37% (Merck, Germany) was used to prepare the aggressive solution (1 M HCl). The concentrations of PPE employed as inhibitor were 0.0625, 0.125, 0.25, 0.5, and 1% v/v in 1 M HCl. All solutions were prepared using double-distilled water. In the weight loss method for corrosion evaluation, steel samples with dimensions of 2 cm × 1 cm × 0.01 cm were immersed in the test solutions for 24 h at 25 °C.
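The weight-loss evaluation described above can be turned into numbers with a small sketch. The mass-loss figures below are invented for illustration, not measured values from this study:

```python
# Hedged sketch of a mass-loss evaluation: corrosion rate as mass loss per
# area per time, and inhibition efficiency relative to the uninhibited
# 1 M HCl blank. Numbers are illustrative only.
def corrosion_rate(mass_loss_mg, area_cm2, hours):
    """Corrosion rate in mg cm^-2 h^-1."""
    return mass_loss_mg / (area_cm2 * hours)

def inhibition_efficiency(rate_blank, rate_inhibited):
    """IE% = (CR_blank - CR_inhibited) / CR_blank * 100."""
    return (rate_blank - rate_inhibited) / rate_blank * 100.0

area = 2 * (2.0 * 1.0)   # both faces of a 2 cm x 1 cm coupon; edges neglected
blank    = corrosion_rate(96.0, area, 24)  # 1.00 mg cm^-2 h^-1 (no inhibitor)
with_ppe = corrosion_rate(24.0, area, 24)  # 0.25 mg cm^-2 h^-1 (with PPE)
print(inhibition_efficiency(blank, with_ppe))  # 75.0
```

The same ratio-of-rates definition of IE% carries over to the electrochemical measurements, with corrosion current densities in place of mass-loss rates.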
Pomegranate peel powder sifted through a 60-mesh sieve was employed for preparation of the extract solution. The solution was stirred slowly at 30 °C for about 4 hours and filtered to obtain a homogeneous solution; methanol was then removed from the mixture by the rotary distillation method [30]. Finally, 13.8 g of extract was obtained from 100 g of pomegranate peel powder and stored in a glass bottle for future use. Electrochemical Measurements. Electrochemical experiments were carried out using an Autolab Potentiostat-Galvanostat (PGSTAT 30). A conventional three-electrode system was used for the electrochemical studies. The working electrode was a mild steel sheet mounted in polyester so that the exposed area was 1 cm². The test surfaces of the specimens were polished with emery paper of grade 400 to 1200, cleaned with acetone, washed with double-distilled water, and finally dried at room temperature before immersion in the test solutions. A saturated calomel electrode and a platinum sheet (approximately 1 cm² surface area) were used as reference and counter electrodes, respectively. For the polarization measurements, the scan rate was 1 mV/s. The immersion time to achieve an equilibrium potential before each electrochemical measurement was 40 min. The impedance measurements were carried out in the frequency range of 10 kHz to 10 mHz at the open circuit potential, by applying a 10 mV sine-wave AC voltage. The impedance data were analyzed using ZView (II) software to determine the parameters of the proposed equivalent electrical circuit models. The constant phase element (CPE) and the charge transfer resistance (R_ct) were calculated from Nyquist plots as described elsewhere [30]. All experiments were performed under atmospheric ambient conditions, and the solution temperature was controlled using a Memmert thermostat (Germany) in all experiments.
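As a minimal numerical sketch (not the paper's ZView analysis) of the equivalent-circuit fitting described above, the simple Randles circuit gives Z(ω) = R_s + R_ct / (1 + jωC_dl·R_ct), evaluated here over the measured frequency range (10 kHz to 10 mHz). The R_ct and C_dl values are the paper's reported inhibited-solution figures; R_s is an assumed value, and an ideal capacitor stands in for the CPE.

```python
import math

def randles_impedance(freq_hz, r_s, r_ct, c_dl):
    """Complex impedance of a Randles circuit (ideal capacitor in place of a CPE):
    Z(w) = R_s + R_ct / (1 + j*w*C_dl*R_ct)."""
    w = 2.0 * math.pi * freq_hz
    return r_s + r_ct / (1.0 + 1j * w * c_dl * r_ct)

# Logarithmic sweep from 10 kHz down toward 10 mHz, as in the measurements:
freqs = [10_000.0 * 10 ** (-0.25 * k) for k in range(25)]
# R_ct = 378 ohm*cm^2 and C_dl = 1.24e-5 F/cm^2 are reported values; R_s = 2 is assumed.
spectrum = [randles_impedance(f, 2.0, 378.0, 1.24e-5) for f in freqs]

# At high frequency Z -> R_s; at low frequency Z -> R_s + R_ct
# (the Nyquist semicircle diameter equals R_ct).
print(f"Re(Z) at 10 kHz ~ {spectrum[0].real:.2f} ohm*cm^2")
print(f"Re(Z) at lowest freq ~ {spectrum[-1].real:.1f} ohm*cm^2")
```

Plotting −Im(Z) against Re(Z) for this spectrum reproduces the single depressed-semicircle shape discussed in the Results section.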
Results and Discussion 3.1. Electrochemical Tests. The Tafel polarization technique was used to evaluate the corrosion inhibition efficiency of PPE. Figure 1 shows Tafel curves for the potentiodynamic behavior of mild steel in hydrochloric acid containing different concentrations of the extract inhibitor. Electrochemical parameters including the corrosion potential (E_corr), polarization resistance (R_p), cathodic Tafel slope (β_c), anodic Tafel slope (β_a), corrosion current density (i_corr), and inhibition efficiency percentage (IE%) derived from the Tafel curves are given in Table 2. The following equation was employed to calculate the inhibition efficiency (IE) from polarization measurements [2,7]:

IE% = ((i_corr − i_corr(inh)) / i_corr) × 100

where i_corr(inh) and i_corr are the corrosion current densities, obtained by extrapolation of the cathodic and anodic Tafel lines, in inhibited and uninhibited solutions, respectively. It can be inferred that the inhibition efficiency improved with increasing concentration of the PPE inhibitor. This may be attributed to improved adsorption of PPE on the metal surface, which leads to its corrosion protection performance. Furthermore, it is clear that in the presence of the PPE inhibitor the E_corr values shifted to more positive values, with no considerable variation across the different inhibitor concentrations. It is well known that pomegranate peel contains organic acids with phenolic, hydroxyl, and carbonyl groups, as well as antioxidant materials [29,31]; some of these organic compounds have been used as organic corrosion inhibitors for metals [32]. The adsorption of organic molecules such as phenolic compounds (gallic acid and granatin B, Figure 2) may be due to the presence of an oxygen atom (a heteroatom), the π electrons of aromatic rings, and electron-donating groups. In other words, heteroatoms such as oxygen are the major adsorption centers through which organic compounds interact with the metal surface [33]. Adsorption can also occur via electrostatic interaction between a negatively charged
surface, which is provided by a specifically adsorbed anion (Cl−) on iron, and the positive charge of the inhibitor [34]. Furthermore, according to the literature [35], the organic acid component itself can form a passive film on the substrate surface. From the present investigation, it can be deduced that the inhibitor molecules adsorb on the metal surface, blocking the active corrosion sites [17]. As a result, the presence of PPE caused a potential shift toward more positive values, indicating that the anodic reaction is inhibited. We can conclude that the decreased oxidation rate of the substrate in the corrosive medium is due to the formation of a protective layer. Figures 3 and 4 show the Nyquist diagrams in the absence and presence of different concentrations of the PPE inhibitor. One depressed capacitive semicircle is present for each sample in the Nyquist diagrams. Analysis of the impedance spectra was performed by means of a Randles circuit, which is the most common equivalent circuit. In addition, Table 3 presents the calculated data obtained with the ZView fitting program. R_s and R_ct are the solution resistance and the charge transfer resistance, respectively, and CPE is the constant phase element for the double-layer capacitance. The R_ct values are calculated from the difference in impedance at lower and higher frequencies, as suggested by Elkadi et al.
[36]. To obtain the double-layer capacitance (C_dl), the frequency at the maximum imaginary component of the impedance (f_max) was determined and the C_dl values were calculated from the following equation [36]:

C_dl = 1 / (2π f_max R_ct)

It is evident that the corrosion of mild steel is inhibited in the presence of the inhibitor and that the R_ct values improved significantly with increasing PPE inhibitor concentration. The value of R_ct increases from 6 Ω cm² (in the absence of the PPE inhibitor) to 378 Ω cm² (in the presence of the PPE inhibitor), and the corresponding C_dl value decreases from 1.836 × 10⁻⁴ to 1.24 × 10⁻⁵ F cm⁻². In other words, as the inhibitor concentration increased, the R_ct values increased while the C_dl values tended to decrease. This is mainly due to the adsorption of the inhibitor on the metal surface [37]. In the case of electrochemical impedance spectroscopy, the inhibition efficiency was calculated from the charge transfer resistances as follows [36]:

IE% = ((R_ct(inh) − R_ct) / R_ct(inh)) × 100

where R_ct and R_ct(inh) are the charge transfer resistance values in the absence and presence of the inhibitor, respectively. Comparing the calculated results confirmed that the inhibition efficiency (IE%) of the PPE inhibitor was enhanced by increasing its concentration. This is consistent with the results obtained by the polarization method. Therefore, there is relatively good agreement between the polarization resistances obtained from both electrochemical methods in the high concentration range of PPE. Weight Loss Measurements. Table 4 shows the weight loss data for mild steel in 1 M HCl in the absence and presence of various concentrations of the inhibitor. The corrosion inhibition efficiencies (IE%) were calculated according to [38] by the following equation:

IE% = ((W_corr − W_corr(inh)) / W_corr) × 100

where W_corr and W_corr(inh) are the weight losses of mild steel in the absence and presence of the inhibitor, respectively.
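The three inhibition-efficiency definitions described above (polarization, EIS, and weight loss) and the C_dl relation can be sketched numerically. Only the two R_ct values (6 vs. 378 Ω cm²) are taken from the reported results; the current densities, weight losses, and f_max below are hypothetical illustrative numbers.

```python
import math

def ie_polarization(i_corr, i_corr_inh):
    """IE% = ((i_corr - i_corr(inh)) / i_corr) * 100 from Tafel extrapolation."""
    return (i_corr - i_corr_inh) / i_corr * 100.0

def ie_eis(r_ct, r_ct_inh):
    """IE% = ((R_ct(inh) - R_ct) / R_ct(inh)) * 100 from charge transfer resistances."""
    return (r_ct_inh - r_ct) / r_ct_inh * 100.0

def ie_weight_loss(w, w_inh):
    """IE% = ((W - W(inh)) / W) * 100 from immersion weight losses."""
    return (w - w_inh) / w * 100.0

def double_layer_capacitance(f_max_hz, r_ct):
    """C_dl = 1 / (2*pi*f_max*R_ct), using the frequency at the maximum
    imaginary impedance of the Nyquist semicircle."""
    return 1.0 / (2.0 * math.pi * f_max_hz * r_ct)

# R_ct values reported above: 6 ohm*cm^2 (blank) vs. 378 ohm*cm^2 (inhibited):
print(f"EIS IE = {ie_eis(6.0, 378.0):.1f}%")                      # 98.4%
# Hypothetical inputs for the other two methods:
print(f"Polarization IE = {ie_polarization(450.0, 45.0):.1f}%")   # 90.0%
print(f"Weight-loss IE = {ie_weight_loss(0.120, 0.015):.1f}%")    # 87.5%
```

Note that the EIS-based efficiency computed from the reported R_ct values (about 98%) is what the claimed agreement with the polarization method at high PPE concentrations refers to.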
The results show that the corrosion inhibition efficiency increases with increasing inhibitor concentration. These observations verify the results obtained from the polarization and EIS measurements in the high concentration range of the extract. Conclusion The results obtained from the electrochemical and weight loss methods showed that pomegranate peel extract acts as a nontoxic, cheap, and easily prepared inhibitor for the corrosion of mild steel in hydrochloric acid media. The corrosion inhibition action of the extract increases as its concentration in the corrosive solution increases. Moreover, its inhibition behavior may be explained by adsorption of the constitutive organic compounds in PPE on the metal surface. Figure 1: Representative Tafel polarization plots for mild steel corrosion in 1 M HCl solution in the absence and presence of different concentrations of PPE inhibitor. Figure 2: Molecular structures of granatin B and ursolic acid. Figure 3: Representative Nyquist diagrams for mild steel corrosion in 1 M HCl solution in the absence of inhibitor at 25 °C. Figure 4: Representative Nyquist diagrams for mild steel corrosion in 1 M HCl solution in the presence of different concentrations of PPE inhibitor. Table 1: Chemical composition of mild steel specimens obtained by the quantometric method. Table 2: Corrosion parameters obtained from polarization plots of mild steel in blank and inhibited HCl solution. Table 3: Parameters obtained from EIS measurements of mild steel in blank and inhibited HCl solution (columns: R_s (Ω cm²), R_ct (Ω cm²), C_dl (F cm⁻²), IE%). Table 4: Weight loss data for mild steel in blank and inhibited HCl solution.
Effect of dapagliflozin on blood glucose control, cardiac function, and myocardial injury markers in patients with type 2 diabetes and heart failure Purpose: To investigate the impact of dapagliflozin treatment on blood glucose control, cardiac function, and myocardial injury markers in patients with type 2 diabetes and heart failure. Methods: In a retrospective analysis of clinical data for 132 patients with type 2 diabetes and heart failure admitted to Beijing Tongren Hospital, China from January 2020 to June 2021, these patients were stratified into two groups (66 patients each). Control group received conventional pharmacotherapy and study group received additional treatment with dapagliflozin. Both treatment courses lasted for 6 months. The levels of blood glucose control, cardiac function, and myocardial injury markers before and after 6 months of treatment were compared between the two groups, as well as safety during treatment. Results: After 6 months of treatment, both groups exhibited significant reductions in fasting plasma glucose (FPG), 2-h postprandial glucose (2 h PG), glycated hemoglobin (HbA1c), N-terminal pro-brain natriuretic peptide (NT-proBNP), cardiac troponin I (cTnI), creatine kinase-MB isoenzyme activity (CK-MB), aspartate aminotransferase (AST) levels, left ventricular end-diastolic diameter (LVEDD), left ventricular end-systolic diameter (LVESD), and Minnesota Living with Heart Failure Questionnaire (MLHFQ) score, with study group showing a greater improvement (p < 0.05). Conclusion: Dapagliflozin enhances blood glucose control and cardiac function, improving quality of life in patients with type 2 diabetes and heart failure. Furthermore, Dapagliflozin demonstrates a safe and well-tolerated profile. Future studies will require establishing the mechanism of dapagliflozin action in a larger and more diverse population. 
INTRODUCTION Type 2 diabetes is a common metabolic disease that induces various cardiovascular diseases. Chronic heart failure is the end stage of various heart diseases. Patients with type 2 diabetes and heart failure have complex conditions, characterized by poor treatment outcomes and severe symptoms of heart failure [1,2]. Choosing antidiabetic drugs for patients with type 2 diabetes and heart failure is more complicated than for patients with diabetes alone. Not only is there a need to consider blood glucose control, but there is also the need to observe whether the drug improves the symptoms of heart failure and ultimately improves prognosis. At present, clinical treatment predominantly centers around metformin for managing hyperglycemia. However, for certain patients, the use of metformin alone may not yield satisfactory results in addressing relative insulin deficiency [3,4]. Dapagliflozin is a sodium-glucose cotransporter 2 (SGLT2) inhibitor that improves blood glucose control without increasing insulin secretion. However, the effects and mechanism of dapagliflozin in type 2 diabetes with heart failure have not been fully clarified. Building upon this premise, a retrospective study involving 132 patients diagnosed with both type 2 diabetes and heart failure was conducted. The study aimed to investigate the impact of dapagliflozin on blood glucose regulation, cardiac function, and myocardial injury markers in this specific patient population.
General information The clinical data of 132 patients with type 2 diabetes and heart failure admitted to Beijing Tongren Hospital, China from January 2020 to June 2021 were retrospectively analyzed. These patients were divided into a control group and a study group according to their treatment plan, with 66 cases in each group. The study was conducted following the Declaration of Helsinki [5] and was approved by the ethics committee of Beijing Tongren Hospital (approval no. 2020-05-012). All patients were informed about the study and signed the informed consent form. The control group consisted of 35 males and 31 females, with an age range of 43 to 79 years and an average age of 65.86 ± 5.56 years. The New York Heart Association (NYHA) functional classification was grade II in 42 cases and grade III in 24 cases. The body mass index (BMI) ranged from 18 to 30 kg/m², with an average of 22.02 ± 1.20 kg/m². The study group consisted of 33 males and 33 females, with an age range of 41 to 73 years and an average age of 66.12 ± 6.28 years. The NYHA functional classification was grade II in 39 cases and grade III in 27 cases. The BMI ranged from 18 to 29 kg/m², with an average of 21.95 ± 1.13 kg/m². There was no significant difference in general information between the two groups (p > 0.05).
Inclusion criteria The following patients were admitted into this study: patients who met the diagnostic criteria for type 2 diabetes as outlined in the "Guidelines for the Prevention and Treatment of Type 2 Diabetes in China (2017 edition)" [6] and exhibited symptoms such as polyuria, polydipsia, polyphagia, and weight loss; patients with FPG levels exceeding 7.0 mmol/L or random blood glucose levels exceeding 11.1 mmol/L; patients who fulfilled the diagnostic criteria for heart failure as described in "Internal Medicine" [7], showing evidence of organic heart disease, reduced exercise tolerance, and fluid retention, confirmed by imaging and laboratory tests; and patients without contraindications for the study drug who were conscious and free from mental illness. Exclusion criteria Patients with malignant tumors or severe infections; patients who had undergone heart transplantation or left ventricular assist device implantation; pregnant women with gestational diabetes; patients with severe gastrointestinal diseases that may affect drug absorption; patients with liver and kidney dysfunction; and individuals with a history of drug abuse or alcoholism were excluded from the study. Drug administration The control group of 66 patients received conventional drugs for the treatment of heart failure, including diuretics, statins, antiplatelet drugs, and nitrate drugs. The study group of 66 patients received dapagliflozin in addition to the conventional treatment given to the control group. The hypoglycemic regimen for the control group was oral metformin hydrochloride tablets (0.25 g, Guoyao Zhunzi H22021184, Changchun Boao Biochemical Pharmaceutical Co. Ltd.), 0.5 g per dose, twice daily. In addition to this, the study group received dapagliflozin tablets (10 mg, Guoyao Zhunzi H20213836, Beijing Fuyuan Pharmaceutical Co. Ltd.) once daily in the morning after breakfast. Both groups were treated for 6 months.
Blood glucose control Before and 6 months after treatment, 2 mL of fasting venous blood and 2 mL of venous blood drawn 2 h after meals were collected from both groups. Serum was obtained after centrifugation at 3,000 rpm for 10 min, and FPG and 2 h PG were measured using the hexokinase method. The reagent kit was provided by Shanghai Gaotrace Medical Equipment Technology Co. Ltd. Serum HbA1c levels were assessed via immunofluorescence chromatography using serum prepared from fasting venous blood samples; the kit was provided by Jiangxi Dayou Medical Technology Co., Ltd. Cardiac function and quality of life Before and 6 months after treatment, the LVEDD, LVESD, and left ventricular ejection fraction (LVEF) were measured using a color Doppler ultrasound diagnostic instrument (Vivid iq) provided by General Electric Medical Systems (China) Co. Ltd. The MLHFQ [8] was used to evaluate the quality of life of the two groups. The MLHFQ covers physical, emotional, and other domains, with scores ranging from 0 to 105 points; the higher the score, the worse the patient's quality of life. Cardiac injury markers Blood samples were collected before and 6 months after treatment, and serum was prepared as detailed in section 1.3.1. Serum levels of NT-proBNP, cTnI, CK-MB, and AST were quantified using enzyme-linked immunosorbent assay. The reagent kit was provided by Roche Diagnostics GmbH, Germany. Safety The frequency of hypoglycemic events, occurrences of hypotension, gastrointestinal reactions, and any indications of liver function impairment were recorded in both groups during the treatment period. Statistical analysis SPSS 21.0 statistical software was used for data analysis. Continuous variables are presented as mean ± standard deviation (SD) and were compared between groups using independent-samples t-tests, while paired-samples t-tests were employed for within-group comparisons.
Categorical variables are presented as n (%) and were compared using the chi-square test. Statistical significance was set at p < 0.05. Blood glucose control status Compared with before treatment, FPG, 2 h PG, and serum HbA1c levels in both groups decreased after 6 months of treatment, and the reduction was more significant in the study group (p < 0.05; Table 1). Cardiac function and quality of life Compared with before treatment, LVEDD, LVESD, and MLHFQ scores were decreased in both groups after 6 months of treatment, and the reductions were greater in the study group (p < 0.05). LVEF increased in both groups after treatment, and the increase was greater in the study group (p < 0.05; Table 2). Cardiac injury markers Compared with before treatment, serum NT-proBNP, cTnI, CK-MB, and AST levels in both groups were decreased after 6 months of treatment, and the reduction was more significant in the study group (p < 0.05; Table 3). Safety during treatment The incidence of adverse reactions during treatment was compared between the two groups, and the analysis revealed no statistically significant difference (p > 0.05; Table 4).
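The between-group comparisons described above (independent-samples t-test for continuous outcomes, chi-square test for categorical ones) can be sketched as follows. This uses SciPy in place of the SPSS 21.0 software named in the Methods, and all numbers are illustrative, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical post-treatment HbA1c (%) values for 66 patients per group:
control = rng.normal(7.4, 0.6, size=66)
study = rng.normal(6.8, 0.6, size=66)

# Independent-samples t-test for the between-group comparison:
t_stat, p_val = stats.ttest_ind(study, control)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")

# Chi-square test on a hypothetical 2x2 adverse-event table
# (rows: groups; columns: patients with events vs. without events):
table = np.array([[5, 61], [6, 60]])
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.3f}, dof = {dof}, p = {p_chi:.3f}")
```

For a 2x2 table, `chi2_contingency` applies Yates' continuity correction by default, which is a common choice for tables of this size.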
DISCUSSION Based on prior research [9,10], it is evident that type 2 diabetes has evolved into a significant public health concern. Persistent hyperglycemia induces myocardial inflammation, exacerbates oxidative stress, and leads to microvascular dysfunction and coronary artery disease, ultimately resulting in heart failure. Both type 2 diabetes and heart failure are chronic, incurable diseases that seriously threaten patients' life and health and consume a large amount of medical resources. Consequently, there is a pressing need for effective treatment strategies in clinical practice. Currently, metformin is the cornerstone of blood glucose control for type 2 diabetes. However, as the disease progresses, monotherapy may not be effective, and its role in improving heart failure is insufficient. This study introduced dapagliflozin as a treatment for type 2 diabetes with heart failure and achieved certain results. The kidney is an important organ for regulating blood glucose. SGLT2, which is mainly expressed in the kidney, is located on the luminal side of the S1 segment of the proximal tubule and promotes glucose reabsorption. Dapagliflozin is the world's first approved non-insulin-dependent hypoglycemic agent. It mainly reduces glucose reabsorption in the proximal tubule and lowers blood glucose levels by lowering the pathological threshold for renal glucose reabsorption and promoting urinary glucose excretion [11,12]. Taylor et al [13] showed that dapagliflozin is effective in treating type 2 diabetes and effectively regulates sugar and lipid metabolism, thus continuously controlling blood glucose levels. Shin et al [14] also reported that dapagliflozin exhibits a robust targeting effect with a reduced likelihood of causing gastrointestinal reactions or liver damage. It primarily acts on elevated blood glucose levels, thereby reducing the risk of hypoglycemic or hypotensive symptoms. In this study, after 6
months of treatment, the FPG, 2 h PG, and serum HbA1c levels of the study group were lower than those of the control group. The incidence of adverse reactions during treatment did not show a statistically significant difference between the two groups. This implies that dapagliflozin treatment for type 2 diabetes and heart failure effectively enhances blood glucose control in patients while maintaining a favorable safety profile. Mitigating myocardial damage, enhancing cardiac function, and effectively managing disease progression are pivotal objectives in the clinical treatment of individuals with both type 2 diabetes and heart failure. In this study, after 6 months of treatment, the LVEDD, LVESD, MLHFQ scores, and serum levels of NT-proBNP, cTnI, CK-MB, and AST were lower in the study group, while LVEF was higher. These findings indicate that dapagliflozin treatment for individuals with type 2 diabetes and heart failure effectively enhances cardiac function and quality of life. This improvement may be attributed to dapagliflozin's capacity to mitigate myocardial damage, as reflected in the injury markers. The mechanism behind this lies in dapagliflozin's ability to reduce chronic systemic inflammation resulting from the accumulation of visceral and subcutaneous fat, consequently limiting myocardial damage and preventing the release of cTnI, CK-MB, and AST from myocardial components into the bloodstream [15]. Dapagliflozin's osmotic diuretic properties contribute to the reduction of both blood pressure and blood volume. This effect helps maintain a proper balance of body water and sodium, alleviating both pre- and post-load stress on the heart. In doing so, it inhibits the release of NT-proBNP under increased cardiac load, effectively managing myocardial fibrosis, reducing ventricular remodeling, and lowering both LVEDD and LVESD [16,17]. Dapagliflozin additionally promotes the conversion of fatty acids to ketones, elevating ketone levels within myocardial cells. This effect leads to an enhancement
of myocardial energy metabolism, a reduction in the activity of membrane ion exchange proteins at the membrane's active center, an increase in mitochondrial calcium levels in the heart, and ultimately an augmentation of myocardial contractility. Giugliano et al [18] have also demonstrated the efficacy of dapagliflozin in the treatment of diabetes with concurrent heart failure. Furthermore, dapagliflozin is shown to contribute to improvements in endothelial function and a reduction in inflammatory factors within the body, and subsequently a decrease in myocardial cell damage, aligning with the findings of this study. Study limitations The sample size was small and only a single study center was used; therefore, these results cannot be generalized. The mechanism of dapagliflozin therapy for type 2 diabetes and heart failure was also not established in this study. CONCLUSION Dapagliflozin treatment for individuals with type 2 diabetes and heart failure results in significant enhancements in blood glucose control, cardiac function, and overall quality of life. These improvements are closely associated with its capacity to reduce myocardial injury markers. Furthermore, it exhibits a reasonable safety profile. Future studies will require establishing the mechanism of dapagliflozin action on blood glucose regulation, cardiac function, and myocardial injury markers in a larger and more diverse population. Table 1: Comparison of blood glucose control between the two groups before and after 6 months of treatment (n = 66). Table 2: Comparison of cardiac function and quality of life between the two groups before and after treatment (n = 66). Table 3: Comparison of cardiac injury markers between the two groups before and after treatment (n = 66). Table 4: Comparison of safety between the two groups during treatment (n = 66).
First Direct Evidence of Long-distance Seasonal Movements and Hibernation in a Migratory Bat Understanding of migration in small bats has been constrained by limitations of techniques that were labor-intensive, provided coarse levels of resolution, or were limited to population-level inferences. Knowledge of the movements and behaviors of individual bats has been unknowable because of limitations in the size of tracking devices and methods to attach them for long periods. We used sutures to attach miniature global positioning system (GPS) tags and data loggers that recorded light levels, activity, and temperature to male hoary bats (Lasiurus cinereus). Results from recovered GPS tags illustrated profound differences among movement patterns by individuals, including one that completed a >1000 km round-trip journey during October 2014. Data loggers allowed us to record sub-hourly patterns of activity and torpor use, in one case over a period of 224 days that spanned an entire winter. In this latter bat, we documented 5 torpor bouts that lasted ≥16 days and a flightless period that lasted 40 nights. These first uses of miniature tags on small bats allowed us to discover that male hoary bats can make multi-directional movements during the migratory season and sometimes hibernate for an entire winter. Individuals of several species of North American bats make biannual migratory journeys between winter and summer habitat 1 , yet compared to birds, our understanding of the details and destinations is nascent. In relative terms, we have extensive knowledge of bird migration that stems from the ease with which humans can observe seasonal changes in species occurrence and obvious group movements over continental scales, as well as the ability of larger animals to carry tracking devices [2][3][4] . Studying migration in the smallest flying animals remains a challenge.
Long-distance movements of small (< 30 g) birds were initially revealed through extensive banding (ringing) efforts 2 , and later by incorporating isotope analyses 5 . Recently, breakthroughs concerning the seasonal whereabouts, flight paths, and long-term activity patterns of small migratory birds have also been made possible by miniature global-positioning-system (GPS) tags 6 and data-recording environmental sensors (hereafter data loggers [7][8][9] ). Recent advances in our understanding of bird migration were made using miniaturized (1-2 g) tracking and sensor devices [10][11][12] , which augurs well for advancing understanding of migration and seasonal behaviors in a particularly difficult-to-study group of long-distance migrants-small bats. As in birds 13 , there is growing recognition that effective conservation of bats requires understanding of their needs beyond the summer breeding season and winter hibernation periods when they are easiest to study 14 . It has long been known that certain species of bats migrate 15,16 , but studying bats is extremely challenging because of their ubiquitously cryptic nocturnal activity patterns and secretive roosting. These difficulties have left broad gaps in our understanding of many bats, but particularly the small-bodied, long-distance migrants. Efforts to study bat migration have employed methods such as banding individuals [17][18][19] , visual observations and captures 20,21 , compiling seasonal maps of occurrence records 22,23 , genetic analyses 24,25 , radiotracking 26,27 , and stable isotope analyses [28][29][30][31][32] . However, the success rates and spatial or temporal resolution of such methods remain low. Although new technology recently enabled following the long-distance movements of large (> 100 g) bats 33,34 , data on the movement patterns and behaviors of small migratory bats do not exist.
Yet, as with birds 3,6 , knowledge of the movements and behaviors of individual animals can provide important insights into their ecology and ultimately help provide for their conservation. In North America, so-called "migratory tree bats" (sensu 35 ) are thought to undertake some of the longest seasonal movements of any bat species. Hoary bats (Lasiurus cinereus) roost individually in the foliage of trees at low density and, despite a wider distributional range than most mammals, are rarely encountered through vast areas of their range 36 ; these characteristics combine to make them one of the most poorly understood migratory tree bats. Seasonal distribution patterns inferred from occurrence records and stable isotope analyses indicate that hoary bats generally migrate southward and towards coasts from their summer range to overwinter 22,23,28,29,37 . Migration is often defined as seasonally mediated, directional movements between habitats 2 . Hence, although it can be expected that animals select migratory routes that minimize energetic costs, the precise movements of individual hoary bats were unknown. Conventional understanding has been that hoary bats move to areas of moderate climate for the winter, which allows them to make frequent use of daily torpor interspersed with occasional feeding when insects are active 38 . However, evidence of extensive cold-season activity by hoary bats in any of their potential wintering areas is lacking. Here, we describe our successful use of two types of miniature data-recording devices that allowed us to gain new insights into the ecology and behavior of individual hoary bats. We sutured GPS tags to male hoary bats and obtained multiple site locations that allowed us to infer long-distance movements of individuals during the autumn migration period.
To other bats, we attached data loggers that recorded light level, temperature, and activity, from which we obtained detailed information on individual hoary bats over periods spanning as long as an entire winter. These data recorded from small, free-ranging, migratory tree bats are the first of their kind and allow us to challenge two assumptions about hoary bats: (1) that their autumn migration routes are directional and generally linear and (2) that, unlike smaller cave-dwelling bats, they do not hibernate or use sequential bouts of multi-day torpor during winter. Results Autumn Movements. We attached GPS tags to 8 male hoary bats in late September 2014 and recovered 3 of them after they had recorded GPS locations (hereafter 'fixes'). In total, we obtained 2, 4, and 6 GPS fixes per bat that were recorded during October 2014. GPS data revealed 3 different behaviors of the bats we tracked: site fidelity, local (< 100 km) movements, and long-distance (> 100 km) movements. We recaptured Bat 479 on 4 different nights and obtained 4 GPS fixes from it over a period of 26 days. The longest movement recorded for Bat 479 was 6.4 km between its first GPS fix and its first recapture location. We recaptured Bat 481 twice following tag attachment leading us to document movements of 51. We recovered 1 of them that had recorded 9 days of data during autumn 2014 and another that had recorded 224 days of data from autumn 2014 through spring 2015 (Fig. 2). We obtained simultaneous activity data from both bats from Sep 27 to Oct 6 2014 (Fig. 3). While both bats were mostly active throughout the entire night, on Sep 30 and Oct 1 both were only active during the first half of the night. Ambient temperatures on both of these nights reached as low as 9 °C, whereas they remained ≥ 12 °C on other nights during this period (Fig. 3). Both bats entered torpor on the evening of Oct 1 as evidenced by cessation of activity and tag temperatures conforming to ambient temperatures. 
Differences in temperature sensor readings between the bats carrying data loggers were sometimes noted; we speculate that these differences could be associated with the bats occupying different areas. For example, on the evening of Oct 4 the blue-tagged bat was active and presumably flying in an area that was about 10 °C warmer than the area where the yellow-tagged bat was active (Fig. 3). We obtained a near-continuous record of activity, light exposure, and tag temperatures of a male hoary bat over 224 days from autumn through spring. Activity occurred on 31 of 34 nights in September and October, but then declined sharply in early November and did not increase substantially until late April (Fig. 2, Table 1). During the period from Feb 03 to Apr 12 2015 the bat began flight activity an average of 53 minutes (range: 37-77 minutes) after sunset and was active for an average of 54 minutes (range: 30-80 minutes) (Fig. 4). By comparing tag temperatures to ambient temperatures at weather stations in the region (Figs S1 and 2) we inferred that the yellow-tagged bat likely overwintered in the vicinity of where we captured it. Excluding two arousals that were not associated with flight, mean temperature of the tag during the 40-day inactive period that included January was 10.1 °C (range = 1-23). On average, the tag was 0.9 °C (range = − 7.0-9.0) warmer than air temperatures at KFOT station (25 km NNW of capture area) and there was strong temporal correlation between the two temperatures (r = 0.88, Fig. 2), indicating that the bat was in torpor during this time. Similarly, the bat remained in torpor during an 18-day period in mid-March despite the mean temperature of the tag reaching 14.3 °C (range = 8-27). We recorded 20 arousals from torpor by the bat between Nov 02 2014 and Apr 12 2015 (Fig. 4), 4 of which were not associated with flight. 
Arousals were more frequent and generally longer in duration during November and December than during January-April, when they occurred, on average, 12 days apart. The mean duration of arousals over the entire winter was 199 (range 55-385) minutes. Discussion We used two new types of technology to assess the ecology and behavior of hoary bats during migration and over-wintering. GPS tags allowed us to determine that some individuals make long-distance, multi-directional movements during autumn, while data loggers allowed us to demonstrate that hoary bats can engage in winter-long hibernation. The three male hoary bats we followed exhibited a variety of movement behaviors during autumn. For one bat we had no evidence that it vacated the general vicinity of where it was captured, whereas another bat flew at least 68 km straight-line distance in a single night and a third completed a > 1000-km circumnavigation of northern California, Oregon, and Nevada over the course of a month. Hence our results demonstrate that some individuals may not engage in relatively simple, directional movements during autumn. The reason for the long-distance, round-trip travel exhibited by the male hoary bats in our study is enigmatic. It is possible that the long-distance movements we documented were associated with bats seeking favorable conditions of temperature and humidity for roosting and foraging 39 . Although this explanation may account for movements to and from areas dozens of km away, it does not seem sufficient, energetically, to explain movements of > 300 km from the study area. Another hypothesis, based on synchrony between autumn migration and mating readiness in hoary bats 40 , is that the male bats we tracked were trying to intercept and mate with females migrating to wintering grounds. Hoary bats are inarguably a migratory species, yet we have shown with a single individual that they are capable of hibernating for a period of 6 months during winter. 
Although early laboratory research indicated that species of Lasiurus may be well-adapted for hibernation 41 , subsequent observations of free-ranging hoary bats using radio-telemetry had only documented multi-day bouts of torpor during summer 42 and periods lasting less than one month during winter in eastern red bats (Lasiurus borealis) 43,44 . The number and length of torpor bouts and frequency of arousal we observed in a male hoary bat, particularly in late winter, were generally similar to what has been observed for bats hibernating in caves and mines [45][46][47] . In fact, the bat we monitored remained in hibernation despite ambient temperatures at which it had been active in the study area during autumn and at which insect prey was likely available. Furthermore, the bat appeared to retain its circadian rhythm, because its arousals coincided with sunset (Fig. 4). Maintenance of a dusk-arousal circadian rhythm throughout winter has also been documented in cave-hibernating bats that live in regions with mild winters, whereas hibernating cave bats in regions with harsher winters tend to lose dusk-arousal rhythms during mid-winter 45,48,49 . Further, on 4 occasions the bat re-warmed without taking flight, demonstrating that, as in other hibernating bats, arousals can be motivated by needs other than feeding 49 . These observations support the suggestion that hibernation is a conserved trait in temperate-zone bat species 50 . Knowledge that hoary bats can move long distances in non-linear ways and hibernate during winter may have practical impacts. For example, hoary bats frequently collide with wind turbines during autumn 51 and are currently presumed to be safe from white-nose syndrome, an emerging disease that heretofore has only impacted cave-hibernating bats 52 . 
Continued use and enhancement of the tracking technologies we demonstrated on hoary bats could help advance understanding of bat biology, as well as some of the most important conservation issues currently involving bats. Methods Tag Attachment. We attached two types of tags to bats: programmable GPS tags with and without VHF transmitters (Pinpoint 8, Lotek Wireless, Newmarket, Ontario, Canada), and data logger tags (GDL3, Swiss Ornithological Institute, Sempach, Switzerland). The GPS-only tags weighed 1.1 g and had dimensions of 22.0 mm × 11.0 mm × 4.5 mm with a posterior-extending antenna 43 mm in length. The GPS tags were programmed to record location on 8 specified dates and times. Five GPS tags also included VHF transmitters (PicoPip AG317, Lotek Wireless, Newmarket, Ontario, Canada). Those tags weighed 1.4 g, and had dimensions of 20.5 mm × 15.0 mm × 6.0 mm with an additional posterior-extending antenna 145 mm in length. VHF transmitters were intended to help establish which animals were still in the study area during the first month after attachment. Both types of GPS tags were re-chargeable and re-programmable without removing the tag from the bat. We programmed GPS tags to record nighttime locations approximately 1 hour after local sunset, and daytime locations at noon. Data from GPS tags consisted of date and time of position, an estimate of tag location in 3 dimensions, PDOP (a measure of location accuracy), and the time required to acquire position information. We estimated GPS location precision as approximately ± 200 m based on tests conducted at ground level using tags that were both stationary and in motion. We also attached 1.14 g data loggers with dimensions of 24.0 mm × 10.0 mm × 4.0 mm to hoary bats. The tags included a 5 mm long light sensor that extended dorso-caudally. Data loggers recorded light-level and animal activity via accelerometer every 5 minutes, similar to those used by Liechti et al. 
11 ; both values are relative and dimensionless. Temperature (°C) was measured at the dorsal surface of the tag every 30 minutes and was a mixture of ambient temperature and bat body temperature. Although temperature readings did not allow us to precisely determine bat body temperature, we were able to determine when hoary bats were euthermic by comparing tag temperature data to patterns in ambient temperature measured by weather stations (Fig. 2, Supplementary Information) and/or to corresponding patterns in tag activity data indicating bat movement (Fig. 3). Tag temperature approximated ambient temperature when a bat was inactive and thermoconforming, and increased above ambient when the bat was inactive but euthermic (Fig. S3) and when the tag was in direct sunlight. We attached both types of tags to bats from Sep 22-27 2014. We captured hoary bats in mist nets along the channel of Bull Creek in Humboldt Redwoods State Park, California (latitude: 40.35, longitude: − 124.01). Bats were captured in standard 2.6-m high mist nets and in a triple-high configuration with three standard mist nets stacked on top of one another. We attached tags to adult male hoary bats, which comprise > 95% of captures at this site, selecting individuals with the highest mass captured on a given night. We attached tags to the dorsum, caudal to the scapulae and cranial to the pelvis, using sutures following the methods of Castle et al. 53 . Bats were released at the capture site after allowing them 20 minutes to recover from anesthesia. Bats to which tags were attached did not exhibit unusual levels of mass loss, skin irritation, or mobility while entering roosts 53 . We attempted to recapture tagged individuals using mist net surveys along the Bull Creek waterway on 19 nights between Sep 26 and Oct 19 2014, 13 nights between Oct 27 2014 and Apr 02 2015, and 22 nights between Apr 12 and May 28 2015. 
When bats carrying GPS tags were recaptured during autumn 2014 we downloaded data and recharged and reprogrammed tags while they were attached to bats 53 . In contrast, recovery of data from data loggers required removal of the tag. Bat capture and handling were carried out in accordance with guidelines of the American Society of Mammalogists 54 under permit with the California Department of Fish and Wildlife (#SC-002911). Our experimental methods were approved by the Institutional Animal Care and Use Committee of the U.S. Geological Survey Fort Collins Science Center (FORT IACUC 2014-08). Data Analysis. For data analysis we considered bats to be active when 5-min activity values exceeded a relative activity level of 6 (on a scale of 0-74), based on a comparison with activity levels logged during daylight hours when the bats were roosting. We considered the nighttime period to be the 5-minute observations between sunset and sunrise in our study area, although this will be inaccurate if bats moved > 100 km from our study area. We considered the bat to be torpid when it was inactive and within 4 °C of the temperature at a nearby weather station (Figs S1 and 2). We defined an arousal as a total increase of ≥ 3 °C in tag temperature that occurred at night in the absence of a similar increase in ambient temperatures at nearby weather stations.
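The classification rules above lend themselves to a simple sketch. The thresholds (activity > 6, torpor within 4 °C of ambient, arousal ≥ 3 °C nighttime rise without a similar ambient rise) come from the text; the function and variable names below are illustrative and are not the authors' analysis code.

```python
# Illustrative Python version of the stated Data Analysis criteria.
# Thresholds are taken from the text; everything else is an assumption.

ACTIVITY_THRESHOLD = 6   # relative activity units (scale 0-74)
TORPOR_DELTA = 4.0       # °C; tag within this of ambient => thermoconforming
AROUSAL_RISE = 3.0       # °C; minimum nighttime rise in tag temperature

def classify_sample(activity, tag_temp_c, ambient_temp_c):
    """Label one logger observation as active, torpid, or inactive-euthermic."""
    if activity > ACTIVITY_THRESHOLD:
        return "active"
    if abs(tag_temp_c - ambient_temp_c) <= TORPOR_DELTA:
        return "torpid"
    return "inactive-euthermic"

def is_arousal(tag_rise_c, ambient_rise_c, is_night):
    """Arousal: nighttime tag-temperature rise >= 3 °C without a similar ambient rise."""
    return is_night and tag_rise_c >= AROUSAL_RISE and ambient_rise_c < AROUSAL_RISE

print(classify_sample(activity=2, tag_temp_c=10.5, ambient_temp_c=9.8))  # torpid
```

Applied to each 5-minute (activity) or 30-minute (temperature) record, rules like these yield the state sequences from which torpor bouts and arousal counts were summarized.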
An Aberrant Microbiota is not Strongly Associated with Incidental Colonic Diverticulosis Colonic diverticula are protrusions of the mucosa through weak areas of the colonic musculature. The etiology of diverticulosis is poorly understood, but could be related to gut bacteria. Using mucosal biopsies from the sigmoid colon of 226 subjects with and 309 subjects without diverticula during first-time screening colonoscopy, we assessed whether individuals with incidental colonic diverticulosis have alterations in the adherent bacterial communities in the sigmoid colon. We found little evidence of substantial associations between the microbial community and diverticulosis among cases and controls. Comparisons of bacterial abundances across all taxonomic levels showed differences for phylum Proteobacteria (p = 0.038) and family Comamonadaceae (p = 0.035). The r-squared values measuring the strength of these associations were very weak, however, with values ~2%. There was a similarly small association between the abundance of each taxa and total diverticula counts. Cases with proximal only diverticula and distal only diverticula likewise showed little difference in overall microbiota profiles. This large study suggests little association between diverticula and the mucosal microbiota overall, or by diverticula number and location. We conclude that the mucosal adherent microbiota community composition is unlikely to play a substantial role in development of diverticulosis. Results We evaluated the role of the microbiota in colonic diverticulosis among 226 patients with diverticulosis and 309 diverticulosis-free controls. As previously reported 11 , participants with diverticula were more likely to be older, male, and have a higher body mass index than those without diverticula (Table 1). In general, we found very limited to no associations between the microbiota profiles and the presence of diverticulosis. 
Across taxonomic levels, Shannon diversity was only significantly associated with diverticulosis case-control status at the class level (p = 0.012; FDR corrected Wilcoxon), but with an associated effect size of <1% (r-squared from Pearson correlation). Similarly, the only association between richness and diverticulosis case/control status was at the class level (p = 0.011; FDR corrected Wilcoxon), again with an r-squared <1% (Supplementary Table 1, Fig. 1). Likewise, multidimensional scaling ordination (MDS) (Fig. 2) revealed no statistically significant differences between diverticulosis cases and diverticula-free controls. We also performed analysis at each phylogenetic level to test the null hypothesis of no association of each taxon with the presence of diverticulosis (Supplementary Tables 2-6). Across all taxonomic levels, phylum Proteobacteria and family Comamonadaceae were the only two taxa that had significant associations at a 5% FDR threshold (Table 2). Even for these taxa, the r-squared values measuring the strength of the association were very weak with values ~2% (Table 2, Fig. 3). We conclude that even though our large sample size allowed us to find some associations, the strength of these associations is very modest despite over 500 total patients in our cohort. We were concerned that the lack of association might be because of the coarse assignment of case-control status to patients who might have a range of disease severity. We therefore compared the abundance of each taxa to the total count of diverticula from each patient. At an FDR-adjusted threshold of p < 0.05, three taxa (Table 3) were significantly associated with the diverticula count, but again the effect sizes were very modest with r-squared values ~1%. We conclude that using the diverticula count rather than a binary case-control assignment did not substantially improve our power. We next asked whether the location of the diverticula made a difference. 
We separately examined the subset of patients who had diverticula in only the distal or only the proximal colon. At a 5% FDR cutoff, there were only two taxa across all taxonomic levels (genus Hallella and Delftia) that showed significant differences between patients with only distal and only proximal diverticula (Supplemental Table 5; n = 135 distal, n = 14 proximal). For both of these taxa, the r-squared value of the association with location was <4%. We conclude that diverticula location did not have a strong effect on the microbial community, although we may have limited power to address this question due to the small number of patients with only proximal diverticula. In addition to diverticulosis, we examined associations with a number of patient metadata (Supplemental Tables 2-5). Associations with sex and race were slightly stronger than the associations with diverticulosis. There were 25 significant taxa associated with sex (Supplemental Table 8) and 40 taxa associated with ethnicity (Supplemental Table 9) at a 5% FDR. While these hits are stronger associations than we saw with diverticulosis, they were quite modest with r-squared values of 2-3% and no taxa showing an r-squared of >6%. Correlations with waist circumference were much more modest with only two significant taxa (phylum Verrucomicrobia and genus Asaccharobacter), both of which had r-squared values of 5%. Only one taxon (class "Deltaproteobacteria") was significantly associated with age (p < 0.05). We conclude that, as has been observed in other large cohorts 12,13 , associations of patient metadata with the composition of the microbiota are modest. Discussion Colonic diverticulosis is common and the complications are costly. Because complications such as diverticulitis can only occur in patients with diverticulosis, if we could uncover the etiologic risk factors for diverticula, we could potentially prevent complications. In this large study, we found little to no difference in microbial composition between individuals with and without diverticula. 
Based on the large size of this study and the small effect sizes we observed, it is not likely that changes in bacterial relative abundance are responsible for the development of colonic diverticula. In addition, the presence of diverticulosis does not alter the microbial composition to a significant degree. Although bacteria have been associated with a number of gastrointestinal disorders, prior information on a bacterial etiology for colonic diverticula is limited. A pilot study of 38 subjects from Italy examined bacteria profiles in feces and mucosal biopsies 10 . Compared to controls, the patients with diverticulosis had a lower relative abundance of Clostridium cluster IV bacteria, although the difference was not statistically significant. The general microbiota composition in colonic biopsies showed no significant differences between controls and diverticulosis patients. There was a lower abundance of Enterobacteriaceae in the diverticulosis cases compared to controls and a non-significant higher abundance of Bacteroides/Prevotella. It should be stressed that this was a study assessing the microbiome of patients with incidental colonic diverticula. This is not a study of the microbiome in patients with complications of colonic diverticulosis. While a proportion of our population reported symptoms of irritable bowel syndrome and chronic abdominal pain, there is no evidence that these symptoms are associated with colonic diverticulosis, so-called symptomatic uncomplicated diverticular disease (SUDD). Our group recently published a colonoscopy-based study that found no association between colonic diverticulosis and chronic gastrointestinal symptoms or mucosal inflammation 14 . As such, we did not assess the microbiome in patients with colonic diverticulosis and chronic symptoms. 
While we found no differences in the gut microbiota between individuals with asymptomatic diverticulosis (AD) and healthy controls, diverticulosis represents a continuum in the progression to diverticular disease. Therefore, we cannot exclude the role of the gut microbiota in the disease progression. Several small studies have reported alterations in the gut microbiota in SUDD patients [15][16][17] . Tursi et al. 18 evaluated the fecal microbiota in SUDD patients, diverticulosis patients and healthy controls. They found no overall differences in bacterial abundances between the three groups, but the levels of fecal Akkermansia muciniphila were significantly higher in diverticulosis and SUDD patients. Another study found higher bacterial diversity and increased abundance of Proteobacteria in diverticulitis patients compared to controls 15 . One study assessed bacteria and fungi in diverticulitis tissue from the sigmoid colon and adjacent unaffected tissue. They observed an enrichment of Microbacteriaceae and Ascomycota in diverticulitis tissue 17 , suggesting that the diverticulum microbiota may be different from adjacent mucosa. These studies implicate the gut microbiota in diverticulitis, but larger studies are needed to confirm their findings. In our study, we assessed the gut microbiota (bacteria) but we did not evaluate the fungal mycobiome because it is an emerging field that was not well characterized until recently. Our large sample size revealed some borderline significant associations, but there was little evidence of a strong association with diverticulosis. As with any negative results, we might have seen stronger associations with different methods (RNA-seq, metabolomics, whole-genome metagenome shotgun sequencing). If we had corrected for multiple hypothesis testing including all hypotheses in one correction, nothing in our paper would have been significant. This again emphasizes the modest nature of the associations that we observed. 
We chose to examine mucosal adherent bacteria from biopsies rather than feces. It was logistically simple and safe to obtain biopsies from patients during their colonoscopy. More importantly, although there are known differences in the bacterial composition of feces and mucosal biopsies 19 , we reasoned that the adherent bacteria would be more likely to influence the colonic mucosa. All patients in the study underwent a colonoscopy prep that could change the bacterial composition; however, adherent bacteria are less influenced by a purge, and all patients were prepped 20 . This paper has notable strengths. All subjects underwent their first colonoscopy for screening purposes rather than colonoscopy for symptoms that might be associated with diverticulosis. We systematically recorded diverticula from all colon segments. Mucosal associated bacteria were evaluated from biopsies from the sigmoid colon. The biopsies were handled in a uniform manner by technicians who were blinded to diverticulosis status. Importantly, the sample size was very large. Because the patients were drawn from a single academic medical center in the US, the results may not be widely generalizable. The pilot study by Barbara et al. reported differences in the microbial composition in symptomatic uncomplicated diverticular disease patients compared to normal controls 10 . Our study was cross sectional. If we had found substantial differences in the bacterial composition of the diverticulosis subjects compared to controls, one might question whether the differences were a consequence of the diverticula and not a cause. In the absence of pronounced differences in composition, however, this is not a concern. The sensitivity of colonoscopy for diverticulosis is not known. Endoscopists in this study were aware of the study and were accompanied by a research assistant who prompted them to report diverticula in each colon segment. 
Consequently the sensitivity is likely to be better than during a clinical exam, but some diverticula are likely to have been overlooked. However, in analyses where we included the number of diverticula, we still found no differences. In summary, in a large study of individuals undergoing screening colonoscopy, we found little evidence of an association between adherent microbial communities and diverticulosis. Alterations in colon bacterial community composition are unlikely to be responsible for the development of colonic diverticulosis. Furthermore, the presence of diverticulosis does not appear to alter the microbial composition of the colon. Methods Participants. This cross-sectional study was designed to assess factors associated with colonic diverticulosis (NIH R01DK094738). Details of the study methods have been described previously 7,11 . Briefly, 226 case subjects with one or more diverticula and 309 controls without diverticula were drawn from outpatients undergoing first time screening colonoscopy at the Meadowmont Ambulatory Endoscopy Center, University of North Carolina Hospitals, Chapel Hill, North Carolina. The study included consented subjects 30 years and older who had satisfactory colonoscopy preparation and complete examination to the cecum. The study excluded those with a history of previous colon resection, or a prior diagnosis of polyposis, colitis, colon cancer, diverticulosis or diverticular disease. Endoscopists carefully examined the colon for diverticula in all segments and the results were recorded on special data collection forms. The number of diverticula in each segment of the colon (cecum, ascending, transverse, descending, sigmoid) was recorded and the numbers summed to indicate the total number of diverticula observed. Biopsies were taken adjacent to sigmoid diverticula when present or from the mid sigmoid in subjects with no diverticula. The biopsies (approximately 3-4 mm in diameter) 21 were obtained using standard (8 mm
wing) disposable, fenestrated colonoscopy forceps. Two biopsies obtained for microbiota profiling were rinsed in sterile PBS prior to freezing in liquid nitrogen to avoid contamination with fecal bacteria 22 . Laboratory personnel were blinded to clinical information and diverticulosis status of subjects. The study was approved by the University of North Carolina Office of Human Research Ethics. All participants gave informed consent. Enrollment of participants and laboratory experiments were performed in accordance with the relevant guidelines and institutional regulations. DNA Extraction, PCR and sequencing. We extracted bacterial genomic DNA from mucosal biopsy specimens as previously described 23,24 . Briefly, normal biopsies from each patient were placed in lysozyme for 30 minutes followed by bead beating and DNA extraction (Qiagen DNeasy Blood and Tissue, kit cat # 69504). The DNA fractions were eluted in 30 μl of elution buffer and stored in aliquots at −20 °C. Illumina library creation was performed using two separate PCR reactions according to a previously published protocol 25 . The first-step PCR (PCR1) contained primers designed to amplify the V2 region of the 16S bacterial rRNA gene and Phusion High-Fidelity Master Mix (Life Technologies, Carlsbad, CA). PCR1 product was diluted 20-fold and used as a template for second-step PCR (PCR2). PCR2 primers contained an Illumina index barcode sequence, Illumina adapter sequence and a tag sequence. There were two sets of PCR2 primers, and each PCR2 reaction received one of each, resulting in a dual-indexed product. One reaction was performed for each sample using Phusion High-Fidelity Master Mix. PCR product was visualized by E-Gel 96 to check samples for amplification. All samples with positive amplification were normalized to 25 ng/µl using the SequalPrep Normalization Kit (Life Technologies, Carlsbad, CA), and an equal volume of each sample library was pooled followed by cleaning using AxyPrep Mag Beads 25 . 
The pool was stored at −20 °C, then shipped to the University of Maryland Institute for Genome Sciences for sequencing using the Illumina MiSeq protocol 25 . Appropriate positive and negative controls were included in all sample preparation steps. A pooled sample of known bacteria served as positive control. Sequence processing and statistical analysis. Although producing adequate DNA can be challenging from biopsy samples, >90% of these samples had at least 1,000 reads assigned by different taxonomy algorithms (Table 4, Suppl. Figure 1) and these samples were used for downstream analysis at each taxonomic level. Forward reads were de-multiplexed and run through version 2.10.1 of the RDP classification algorithm 26 at a 50% confidence score (Table 1), or through the pick_closed_reference_otus.py script in QIIME 1.91. Read counts were log normalized as previously described 20 . The alpha-diversity and richness measurements were performed using the functions "diversity" and "rarefy" from the vegan package in R, with the subsample size of "rarefy" set to the minimum number of sequences detected in any sample. MDS ordination was performed with Bray-Curtis dissimilarity using the vegan package in R. Log-normalized abundance values for each taxon at the phyla, class, order, family and genus levels (RDP algorithm) or OTU were evaluated with a series of linear models and non-parametric tests. P-values were corrected for multiple hypothesis testing using B & H FDR correction 27 with correction occurring separately for each test at each taxonomic level. To preserve power, statistical tests were only constructed for taxa that were present in at least 25% of all samples. All linear models and statistical tests were conducted in R. 
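Two of the computations above, the Shannon-diversity/rarefied-richness step (the vegan "diversity" and "rarefy" calls) and the Benjamini & Hochberg FDR correction, can be sketched compactly. The authors worked in R; the Python version below, with made-up read counts and p-values, illustrates the calculations rather than reproducing their pipeline.

```python
import math

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i), as computed by vegan's diversity()."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def rarefied_richness(counts, subsample):
    """Expected number of taxa in a random subsample of fixed size (vegan's rarefy())."""
    total = sum(counts)
    return sum(1 - math.comb(total - c, subsample) / math.comb(total, subsample)
               for c in counts if c > 0)

def bh_adjust(pvals):
    """Benjamini & Hochberg FDR-adjusted p-values, returned in the input order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for k, i in enumerate(reversed(order)):
        rank = m - k                      # 1-based rank of pvals[i]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = min(running_min, 1.0)
    return adjusted

counts = [500, 300, 150, 50]              # hypothetical reads per taxon in one sample
print(round(shannon(counts), 3))          # 1.142
print(round(rarefied_richness(counts, 10), 2))
print(bh_adjust([0.001, 0.008, 0.039, 0.041, 0.60]))
```

Applying the BH correction separately at each taxonomic level, as described above, means a raw p-value near 0.04 can survive or fail the 5% FDR cutoff depending on how many taxa are tested at that level.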
The R code used is available here: https://github.com/afodor/metagenomicsTools/blob/master/src/scripts/topeOneAtATime/metadataTests.txt Each linear model took the form of: Y ~ metadata, where "Y" is the alpha-diversity, richness, MDS axis or log normalized abundance and the metadata is the case/control status (for a two-factor one-way ANOVA), sex (for a two-factor one-way ANOVA), race (white, black or other, for a three-factor one-way ANOVA), diverticula count (for a linear regression) or waist circumference (for a linear regression). As indicated in the text, non-parametric equivalents to linear models were used to generate p-values, including the Wilcoxon test for two-factor metadata, the Kruskal-Wallis test for multi-factor metadata, and the Kendall test for association of two quantitative variables. In order to ensure that our results were not a consequence of our use of the RDP algorithm, we performed t-tests comparing case and control status for each taxa at the genus level with both the RDP algorithm and with the OTUs from the QIIME pipeline. The inference produced from these two classification schemes was highly concordant (Supplementary Fig. 1), demonstrating that our results are robust to our choice of classification scheme. Data Availability. The datasets generated from this study are available from the corresponding author on request. Raw sequences are available in the NCBI SRA data repository via submission SUB3467354 under Bioproject PRJNA429136. Table 4. Number of sequences identified by the RDP classification algorithm*. *The number of sequences identified by the RDP classification algorithm at a threshold of 50% (for phylum through genus) or assigned to an OTU in QIIME 1.91. Almost all 226 case and 309 control samples had at least 1,000 sequences per sample (last column) and these samples were used for analysis at each phylogenetic level.
Changing Agricultural Landscapes in Ethiopia: Examining Application of Adaptive Management Approach Ethiopia has decades of experience in implementing land and water management interventions. The overarching objectives of this review were to synthesize the evidence on the impact of implementation of land and water management practices on agricultural landscapes in Ethiopia and to evaluate the use of adaptive management (AM) approaches as a tool to manage uncertainties. We explored how elements of the structures and functions of landscapes have been transformed, and how the components of AM, such as structured decision-making and learning processes, have been applied. Despite numerous environmental and economic benefits of land and water management interventions in Ethiopia, this review revealed gaps in AM approaches. These include: (i) inadequate evidence-based contextualization of interventions, (ii) lack of monitoring of bio-physical and socioeconomic processes and changes post implementation, (iii) lack of trade-off analyses, and (iv) inadequacy of local community engagement and provision of feedback. Given the many uncertainties we must deal with, future investment in AM approaches tailored to the needs and context would help to achieve the goals of sustainable agricultural landscape transformation. The success depends, among other things, on the ability to learn from the knowledge generated and apply the learning as implementation evolves. Introduction Geological weathering and erosion are constructive natural processes that maintain the functioning of agricultural landscapes and ecosystem services [1][2][3]. Anthropogenic drivers often accelerate some of these natural processes and can negatively affect the structure and functions of agricultural landscapes and ecosystem services [1,3,4]. A recent study by Nkonya et al. [5] demonstrated that about 30% of the global land area, home to about three billion people, suffers from land degradation. 
This translates to an annual cost of about USD 300 billion. Land degradation is particularly severe in sub-Saharan Africa (SSA), which accounts for about 22% of the total global cost of land degradation. As in other sub-Saharan African countries, land degradation is significant in Ethiopia and is causing considerable negative environmental and economic impacts [6-10]. For example, the direct cost of the loss of soil and essential nutrients due to unsustainable land management was estimated in 1994 at 3% of the country's agricultural GDP, or USD 106 million [11]. Gebreselassie et al. [12] estimated the net cost of land degradation in Ethiopia due to land use and land cover changes at about USD 4.3 billion annually, a value 44 times higher than the 1994 estimate. The challenges of land degradation in Ethiopia entail the need to transform and restore the agricultural landscape by addressing the drivers of land degradation while maintaining or increasing ecosystem services. According to the World Bank [13], this would play an important role in reducing poverty: it was estimated that every 1% growth in Gross Domestic Product (GDP) would result in a 0.15% reduction in poverty. Economic growth in the agricultural sector plays an even more important role: for every 1% increase in agricultural output, poverty would decrease by 0.9%. In relation to the management of agricultural landscapes in Ethiopia, Haregeweyn et al. [14] showed that indigenous soil and water conservation (SWC) measures have been applied for centuries, but improved SWC measures only came into practice following the recurrent drought-triggered famines of the 1970s and 1980s. The implementation of indigenous and improved SWC measures can help address new challenges such as climate change impacts and can be considered a socio-political opportunity for better livelihood outcomes [14].
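The poverty-elasticity figures quoted above from the World Bank [13] amount to a simple linear relation between growth and poverty reduction. The sketch below only restates that arithmetic; the 5% growth figure is an invented example, not a number from the review.

```python
# Linear poverty-elasticity arithmetic as quoted from the World Bank [13]:
# each 1% GDP growth -> 0.15% poverty reduction; each 1% agricultural
# output growth -> 0.9% poverty reduction. Growth inputs are illustrative.
GDP_ELASTICITY = 0.15
AG_ELASTICITY = 0.9

def poverty_reduction(growth_pct: float, elasticity: float) -> float:
    """Expected percentage-point reduction in poverty for a given growth rate."""
    return growth_pct * elasticity

print(poverty_reduction(5.0, GDP_ELASTICITY))  # 5% GDP growth -> 0.75
print(poverty_reduction(5.0, AG_ELASTICITY))   # 5% agricultural growth -> 4.5
```

The contrast between the two outputs illustrates the review's point that growth in the agricultural sector is several times more poverty-reducing than the same growth in overall GDP.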
In the context of the current study, indigenous SWC measures refer to locally developed and practiced land and water management technologies (e.g., the Konso stone-walled terrace), whilst improved SWC measures are newly introduced measures, or indigenous SWC measures whose design and implementation have been improved through science. In this regard, Sayer et al. [15] demonstrated the need to adopt knowledge-intensive and site-specific sustainable agricultural landscape management options, including the use of indigenous and improved SWC practices. Regardless of their type, SWC measures can add new structures to, or change existing structures of, a landscape and thereby influence landscape functions and the overall processes of landscape transformation. Birge et al. [16] argued that managing agricultural landscapes through land and water management practices (e.g., SWC measures) can take unpredictable trajectories and trigger unintended results (e.g., environmental pollution, land use conflicts); therefore, it must take into consideration temporal and spatial process variability and be aligned with the socio-political context. This entails mechanisms to operationalize adaptive management (AM) approaches. Adaptive management is an approach to natural resource management for people who must act despite uncertainty about what they are managing and the impacts of their actions [15-17]. The adaptive process is often represented as a cycle of plan, do, monitor, and learn. The overarching objectives of this review were to synthesize evidence on the impact of implementation of land and water management practices on agricultural landscapes in Ethiopia and to evaluate the use of AM approaches as a tool to manage uncertainties. The review employed generic AM cycles as proposed by Birge et al. [16].
Context of Transforming Agricultural Landscapes in Ethiopia and Analytical Framework Applied in This Review

A landscape is perceived as a system of natural, biophysical, and socio-cultural components that undergoes continuous transformation due to both natural and anthropogenic drivers [18,19], as presented in Figure 1. The natural processes (Figure 1C) involve, for example, the pedogenic process influenced by the compounded effects of the lithosphere, hydrosphere, and biosphere as well as climate. The second, human-induced process (Figure 1A) is triggered by socioeconomic, cultural, and political interests [1]. Human social systems and landscape ecosystems are complex adaptive systems [20]: complex because ecosystems and human social systems have many elements with non-linear and dynamic connections between those elements (Figure 1); adaptive because they require feedback mechanisms with adaptive decisions/actions in response to a constantly changing environment [19]. In the context of this review, Ethiopia's agricultural landscapes are considered a mosaic of farmers' fields, infrastructures (e.g., terraces, micro dams), and occasional natural habitats, and they are the result of interactions between farming activities and the natural and socioeconomic settings in an area [19,21]. The ongoing implementation of land and water management practices and the resulting transformation of agricultural landscapes in Ethiopia are attributed to both natural and anthropogenic drivers [14,22]. People modify the landscape by changing its structures (e.g., by installing SWC measures, planting or cutting trees, building micro dams, extracting groundwater, changing land use, etc.) to attain improved landscape functions and support their livelihoods [19]. In many cases, the focus of human-induced processes is on increasing provisioning ecosystem services (e.g., food production).
Such a focus on a single ecosystem service can have negative feedback (Figure 1) on transforming landscapes and maintaining the diverse ecosystem services that multi-functional landscapes can provide. Following the objectives of this review, we focused on human-induced transformations within agricultural landscapes; therefore, changes from agriculture to urban land, or vice versa, were not considered. The analytical framework and key indicators used are illustrated in the next section. According to Forman and Godron [20], landscapes have three user-defined components:

1. Structure: the spatial pattern of landscape units, i.e., the spread of plants and animals, the arrangement of landscape elements, land-use and land-cover (LULC), artificial structures, etc. (Figure 1B).
2. Function: the interactions between the landscape units, i.e., water, nutrient, and energy fluxes and the migration of organisms (Figure 1B); usually used synonymously with ecosystem function.
3. Changeability: the transformation of landscape structure and functioning over temporal scales.

Human-induced and natural processes act continually on the first two components, while the third, changeability, is an integral part that results from these actions and reactions (and is thus not represented separately in Figure 1). Helming et al. [23] indicate a three-layered hierarchy of landscape structures (Figure 1B) that are introduced here with regard to human-induced transformations:

1. Primary landscape structure (Figure 1(B1)): This is the original and permanent basis for the other structures. Although it is least influenced by human activities, the primary landscape structure shapes the type and magnitude of interventions and their outcomes under the secondary and tertiary landscape structures. Hence, it is considered intrinsic to interventions and is not discussed in further detail in this review [18].
2. Secondary landscape structure (Figure 1(B2)): According to Skokanová and Eremiášová [24], this layer involves, for example, the current LULC or the geographical elements created to improve productivity, such as a dam or SWC measures. Given their multiple ecosystem functions and their pervasiveness in landscape transformation, LULC changes and SWC- and water harvesting-related indicators were key focus areas of this review [25]. We will use both quantitative and qualitative information to illustrate changes in ecosystem functions and services (e.g., biodiversity, soil erosion, agricultural productivity, carbon sequestration, etc.) due to changes in the secondary landscape structure.
3. Tertiary landscape structure (Figure 1(B3)): This layer comprises mainly elements of the socioeconomic sphere, such as (in)tangible interests, and expressions of and effects on society in the landscape [23]. Here, we focus on examples illustrating livelihood transformation in relation to secondary landscape structural changes, such as income from agricultural activities (irrigation from micro dams, income from afforestation, rainfed farming intensification), and on how different SWC measures have positively or negatively transformed livelihoods in the community.

In this analytical framework, the landscape structure that relates to agricultural landscape transformation and land and water management practices belongs to the secondary layer (Figure 1(B2)). Some examples include expansion of cultivated land, exclosures for landscape restoration, physical SWC measures, and water harvesting structures. These are typical activities undertaken to manage agricultural landscapes in Ethiopia [14,23,26]. Tertiary landscape structures (Figure 1(B3)) are linked to landscape structure 2 (B2) and how it transforms livelihoods.
Recent evidence related to how landscape structures B2 transform lives and livelihoods [27] is an important point of discussion, particularly in view of providing incentives to guide behavioral change and to support local communities in adopting certain land and water management interventions. The functional component (Figure 1(B4)) of a landscape consists of the processes influenced by its structure and processes driven by the human and natural system. This component is synonymous with ecosystem function and controls the provision of ecosystem services (Figure 1D). In the introduced analytical framework, changes in the landscape function are driven by the changes in structure. For example, we consider how changes in the landscape structure such as LULC changes have influenced carbon sequestration, biodiversity, and erosion [28], or how SWC measures have influenced the restoration of degraded landscapes and agricultural production.

[Figure 1. Caption fragment: "... [23] and Hermann et al. [19]). LULCC stands for land-use and land-cover changes."]

Landscape structure and functions are highly interconnected. To gain an in-depth understanding of these, selection of the right scale is important, considering that the spatial and temporal scales of the processes and observations need to be aligned. However, the availability of quality data, both spatial and temporal, is often limiting. This work considers evidence generated at different scales (farm plots, watersheds, landscapes, and basins) and consolidates the implications at the national scale. Temporal-scale information is likewise fragmented. Moreover, assessments of interventions in terms of their long-term impact on the performance of a landscape in delivering a broad range of benefits, including the transformation of livelihoods and ecosystem services, are very scarce in Ethiopia, as in many developing countries. Therefore, establishing empirical evidence of temporal trends for the target indicators is beyond the scope of this review.

Data Sources

Data were collected from three major sources: peer-reviewed articles included in the Scopus and ISI Web of Science databases, grey literature, and expert knowledge following discussions in the agricultural water management platform in Ethiopia. The terms used to search for literature, separately and in combination, included 'landscape', 'landscape transformation', 'ecosystem services', 'sustainability', 'conservation and development', 'land use change', 'exclosure', 'soil erosion and sedimentation', and 'carbon sequestration'. Where relevant, we specified Ethiopia in these searches. Of the 71 articles identified, 26 covered the general scientific background of agricultural landscapes and their transformations and 45 were specific to Ethiopian agricultural landscapes. Tables 1 and 2 indicate how often elements of AM were mentioned in the selected literature, either as recommendations or as gaps for sustainable landscape transformation, and the number of cases where AM elements were mentioned (directly or implicitly) for each of the target indicators (as in Figure 1(B2, B3)). We observed that different elements of AM were mentioned 142 times, and more than 80% of these concerned indicators under the structural landscape component. Many articles note gaps in one or more elements of the AM approach for the respective agricultural landscape transformation interventions.
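The counting exercise just described, tallying how often AM elements were mentioned per target indicator across the coded articles, is a simple cross-tabulation. The sketch below shows one way such a tally could be computed; the records are invented for illustration and do not reproduce the review's actual coding data.

```python
from collections import Counter

# Hypothetical coded records: (target indicator, AM element) pairs extracted
# from reviewed articles. These example values are invented for illustration.
codings = [
    ("LULC change", "structured decision-making"),
    ("LULC change", "monitoring"),
    ("SWC measures", "structured decision-making"),
    ("SWC measures", "learning"),
    ("water harvesting", "structured decision-making"),
]

# Tally mentions per indicator (rows of a Table 2-style summary) and
# per AM element (e.g., structured decision-making vs. learning components).
per_indicator = Counter(indicator for indicator, _ in codings)
per_element = Counter(element for _, element in codings)

print(per_indicator["LULC change"])               # 2
print(per_element["structured decision-making"])  # 3
```

With the full coding data, the same two counters would yield the 142 total mentions and the share attributable to structural indicators reported in the text.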
Some scholars connected the structures and functions (Table 1) of landscapes in a cause-and-effect relationship; therefore, many articles were assigned to multiple indicators.

Adaptive Management in Relation to Landscape Transformation

Adaptive management and the theory of change for landscape approaches [15,16] are comprehensive and complementary frameworks related to landscape interventions and the restoration of ecosystem services. According to Sayer et al. [15], a theory of change traces the links between an intervention and an ultimate impact and makes explicit the assumptions underpinning prediction of the result. The theory of change demonstrates the causal pathway and feedback loops driving progress towards improved landscape performance. Furthermore, these studies noted that metrics are needed at multiple stages throughout the process to understand progress and to inform policy and decision-making. The concept of AM has evolved in numerous directions, but all are centered around iterative learning about a system and making management decisions based on that learning [17,29]. The learning components focus on science (e.g., monitor, evaluate, and adjust), while the others focus more on structured decision-making: defining the problem, identifying objectives, formulating evaluation criteria, estimating outcomes, evaluating trade-offs, and deciding [16]. The adaptive process is often represented as a cycle of plan, do, monitor, and learn; it can guide informed decision-making while implementing activities related to landscape structural changes and also helps to address post-implementation trade-offs (Figure 1). We used elements representing both the structured decision-making and the learning components to better understand the drivers and outcomes in the entire landscape (Table 1). In this line, Stirzaker et al.
[29] argue that using real-life management of the system as a whole, and turning it into an experiment by asking the right questions, implementing decisions, collecting the right data, and learning from the experience, is crucial to understanding landscape transformations. The attribute of AM that makes it distinct from the traditional trial-and-error approach is that it involves exploring alternative ways to meet management objectives: it forecasts the outcomes of alternatives based on the current state of scientific knowledge and implements one or more of these alternatives. Adaptive management then monitors the impacts of management actions, updates knowledge, and adjusts management decisions.

Table 1. Matrix matching the framework of target indicators of landscape structures B2 and B3 (Figure 1) and key elements of the adaptive management approach (Birge et al. [16]). X stands for cases where the reviewed literature indicated that application or lack of adaptive management (AM) triggered good or bad performance of the focus indicators. Indicators under functional elements do not follow the whole AM approach, as they result from changes in structure and from external drivers, and are therefore marked by *.

In this review, we argue that the principles of AM can be applied to the concept of landscape transformation arising from agricultural landscape intensification. We argue that the identification and monitoring of relevant indicators representing key landscape structures and functions support adaptive learning and decision-making in sustainable agricultural intensification. We apply the developed framework in the context of degraded landscapes and the implementation of physical and biological SWC measures. Table 1 matches the framework of target indicators (B2 and B3 in Figure 1) with key elements of the AM approach [16].
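The AM cycle described above, structured decision-making (forecast outcomes of alternatives, choose one) followed by the learning loop (implement, monitor, update knowledge), can be sketched as an iterative procedure. This is only an illustrative abstraction of the logic, not an implementation from Birge et al. [16] or any reviewed program; every name in it is hypothetical.

```python
# Illustrative sketch of the adaptive management cycle: plan, do, monitor,
# learn. All function and variable names are hypothetical stand-ins.

def adaptive_management(alternatives, forecast, implement, monitor, cycles=3):
    """Run a simplified plan-do-monitor-learn loop for `cycles` iterations.

    Each cycle: forecast the outcome of every alternative given current
    knowledge (plan), implement the best-scoring one (do), collect an
    observation (monitor), and feed it back into knowledge (learn).
    """
    knowledge = []  # accumulated (cycle, chosen alternative, observation) records
    for cycle in range(cycles):
        chosen = max(alternatives, key=lambda a: forecast(a, knowledge))  # plan
        implement(chosen)                                                 # do
        observation = monitor(chosen)                                     # monitor
        knowledge.append((cycle, chosen, observation))                    # learn
    return knowledge

# Toy usage: a forecast that prefers less-tried alternatives, so monitoring
# data spread across options as cycles accumulate.
log = adaptive_management(
    alternatives=["stone bunds", "exclosure"],
    forecast=lambda a, k: -sum(1 for _, chosen, _ in k if chosen == a),
    implement=lambda a: None,       # stand-in for field implementation
    monitor=lambda a: "observed",   # stand-in for monitoring data
    cycles=2,
)
print(len(log))  # 2
```

The point of the sketch is the feedback arrow: because `forecast` receives the accumulating `knowledge`, later decisions can differ from earlier ones, which is exactly what distinguishes AM from a fixed trial-and-error plan.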
For each of the selected indicators, we explored whether the reviewed literature attributed the failure or success of the interventions to one or more elements of the AM approach in Ethiopia [16]. The contributions of this review include: (i) the proposed analytical framework, together with a demonstration of its applicability to target indicators and to understanding landscape transformation in specific areas, which can be applied elsewhere, for example in SSA; and (ii) an interactive presentation of the structure and function of landscape transformation, relating each of them to the learning and decision-making elements of adaptive management in the Ethiopian context.

Land-Use and Land-Cover Changes: Expansion of Cultivated Land

Changes in LULC globally are driven by multiple factors, including population increase, poverty, economic activities, and other socioeconomic factors [30]. Land-use and land-cover changes are so pervasive that, when aggregated globally, they significantly affect key aspects of the landscape structure [28]. Studies by Kindu et al. [31], Gashaw et al. [32], and Deribew and Dalacho [33] documented changes in LULC and ecosystem services over time in Ethiopia. Many of them, however, did not provide comprehensive nationwide evidence. Available information at the micro scale (e.g., farm fields, watersheds) and meso scale (river basins, regions) shows, however, that the magnitude of change is enormous and that the direction of change varies across regions and scales of study. For example, from a 145-year analysis of the situation in the northern highlands of Ethiopia, Nyssen et al. [22] concluded that the landscape is greener now. Gebremichael et al. [34], from their work in the Blue Nile basin, concluded that erosion and sedimentation increased by 81% due to increased land conversion to cropland.
The only recent national-scale land-use change study involving agricultural land expansion (2000-2010) was done by the United Nations Convention to Combat Desertification (UNCCD) [35]; it showed only a 0.38% decline in forest cover, with a proportionate increase in cropland, shrubland, grasslands, and sparsely vegetated areas. Despite the small change from natural land cover to cultivated land illustrated by this national-scale work [35], contrasting values from meso- and micro-scale studies imply that there are hotspot areas of LULC change where ecosystem functions and services are degrading rapidly [36]. As summarized in Table 2, the total number of incidents in which AM elements were mentioned as recommendations or gaps in current LULC practice was 42. Of these, more than 70% related to the element of structured decision-making, while the rest related to the learning components of adaptive management. Overall, from an agricultural land perspective, three major causes of LULC change can be recognized: (i) expansion of agricultural land due to individual farms encroaching into other land-use types; (ii) foreign direct investment (FDI) in agriculture; and (iii) restoration of degraded lands through physical soil conservation measures integrated with tree planting and biological soil conservation measures through, for example, exclosures.

(i) Expansion of agricultural land due to activities by local farm investments

Studies indicated that agricultural lands have been increasing in different parts of the country at the expense of forested land, grassland, and shrublands. For example, a study conducted in the central highlands of Ethiopia [37] demonstrated a 62% increase in cropland between 1975 and 2014, which mainly occurred at the expense of grasslands.
Similarly, Deribew and Dalacho [33] showed that over the course of 60 years (1957-2017), agricultural land and forest land showed comparably equal extents of net change (+36.7% and −37.8%, respectively), but in opposite directions. Such changes in agricultural land have resulted in an increase in total crop production over the past decade [38]. However, studies by the World Bank [13] and Bachewe et al. [39] showed that the relative contribution of agricultural land expansion to increases in agricultural production was decreasing in the period 2005-2015, which can be explained by the gain in yield due to increased use of fertilizers, herbicides, improved seeds, and irrigation. This finding was supported by Franks et al. [38], who found a tendency for production gains to accrue from higher land productivity rather than from an expansion of cultivated land. Similarly, the World Bank [13] reported that the agricultural sector of Ethiopia recorded remarkably rapid growth in the past decade, and that this was the result of strong yield growth as well as an increase in cultivated area, which rose by 7% and 2.7% per year, respectively, during the period 2004-2014. Kibret et al. [40], from their LULC change study in south and central Ethiopia, concluded that land conversion to agriculture in that part of the country may have reached a cut-off point beyond which it would have ecological consequences. Headey et al. [41] also argued that, with little suitable land still available for expansion of crop cultivation, especially in the highlands, future cereal production growth would have to come from yield improvement. Despite the presence of some areas where production still depends on expansion of cultivated land, much of the evidence above [13,39-41] suggests that the future direction of agricultural productivity increase in Ethiopia is likely to be intensification. For intensification and extensification to be sustainable, tools such as AM can be useful.
It guides the process of assessing the underlying trade-offs and seeking options for optimal management choices under conditions of uncertainty.

(ii) Foreign direct investment in agriculture

Despite the few studies that explore the nature and benefits of FDI in Ethiopia, Mulue et al. [42] reported that between 1992 and 2017, investment in 122 projects (8.8 ETB, about USD 2.6 billion) was recorded: the third largest area of FDI after manufacturing and contracting. Bossio et al. [43] indicated FDI in agriculture with close to 2 million ha of disclosed contracts for the lease of land. Many of these land areas were in Gambela, Beni-Shangul Gumz, and Oromia regional states. The aim of such investments was to increase provisioning ecosystem services (food production), technology transfer, job creation, and the flow of capital into the country [42]. Although only a small portion of the 2 million ha of land has been put into practice, scholars argue that environmental sustainability in agricultural production is a major issue in the context of large-scale FDI in agricultural land [42]. Intensive agricultural production has negative impacts on biodiversity, forests, land, soil, and water resources. In this regard, Teklu et al. [44] reported the emerging threat of pesticide pollution of water resources in the Rift Valley: areas where flower farms and intensive irrigated agriculture are practiced, partly through FDI. Overall, limited empirical evidence has been gathered on the opportunity costs (e.g., environmental impacts, human health) of such land-use change. Bossio et al. [43], who examined the impacts of FDI on water resources, indicated a potential increase in the consumptive use of freshwater resources, thus straining the already scarce freshwater resources, although the investment may indeed enhance land and water productivity.
Here, multiple suggestions can be made in relation to AM to mitigate the negative impacts of such landscape transformation measures: (i) identify areas with the least opportunity cost; and (ii) systematically monitor the emerging changes in landscape structure and functions in order to contribute evidence supporting the AM cycle and to apply that knowledge in future development endeavors [42].

(iii) Restoration of degraded landscapes through physical and biological soil conservation measures

Since the 1970s and 1980s, several national programs, including the Sustainable Land Management (SLM) program (phases I and II) and the Productive Safety Net Program (PSNP), have supported the implementation of SWC measures in the country. For example, during the period 2010-2015, more than 15 million people contributed unpaid labor (equivalent to USD 750 million each year) to the SLM program [45]. During this same period, SWC measures were introduced in more than 3,000 watersheds and more than 12 million hectares of land were rehabilitated by implementing different physical soil and water conservation measures [45,46]. A number of biological and physical SWC measures are currently applied across Ethiopia. The practices are mostly single-technology focused or occasionally integrate physical and biological measures. The most common physical SWC measures are gully rehabilitation, furrows, check dams, waterways, fanya juu, drainage ditches, cut-off drains, bunds of different types (stone, soil, or combined), terraces, contour ploughing, and water harvesting. The biological SWC measures involve practices such as alley cropping, grass strips, afforestation, and exclosure. Exclosures are usually community-initiated practices on degraded grassland with shallow soil and have increasingly become a space for the integration of physical and biological conservation measures.
Studies demonstrated that the implemented physical SWC measures played an important role in rehabilitating degraded landscapes and improving ecosystem services [27,47-49]. For example, the transformation of these landscapes through SWC measures has resulted in increases in water retention and groundwater recharge. This provides opportunities to support supplementary or full irrigation in rainfed or dry-season agriculture, respectively [50]. Shallow groundwater at less than 20 m depth incurs less cost and is easier to extract; thus, it can be an incentive to invest in SWC. Estimates show that shallow groundwater can irrigate as much as 8% of the total irrigable land in Ethiopia [51]. Gowing et al. [50] argue that increased groundwater recharge and the availability of shallow groundwater are an opportunity for the intensification of agriculture and ecosystem services. Sustainable exploitation of this opportunity, however, needs careful monitoring of water abstraction and use and of impacts on water quality, which is implicitly linked to the application of an AM approach. Of the biological soil conservation measures, the establishment of exclosures on degraded landscapes has been given more emphasis due to its multiple benefits [52-55]. Exclosures are areas protected from the interference of humans and livestock to promote the natural regeneration of secondary vegetation. Recent estimates indicated that more than 4.2 million hectares of land in the country are covered by exclosures [27]. Ethiopia recently pledged to rehabilitate 15 million ha of degraded land by 2030 [45] and, according to the government's plan, about 50% of this land (over 7 million ha) will be rehabilitated by establishing exclosures [45]. However, local communities raise concerns about the long-term soil conservation approaches and technologies discussed above, as the measures are not effective in generating short-term economic benefits [27].
The critical questions, therefore, are: (i) how would this land and water management, specifically SWC measures, work for poor rural communities? (ii) How well are farmers organized and enabled to take collective action? (iii) What are the incentives and requirements to support local communities in adopting long-term conservation approaches? A recent work by Mekuria et al. [27] proposed business model scenarios to explore the feasibility of exclosures and address the complex challenges related to implementation. These business models identified short-term revenue streams such as beekeeping, harvesting fodder for livestock fattening, and cultivating high-value plant species, including fruit trees and herbs. These are feasible, sustainable economic activities that could allow for the restoration of ecosystem services over the long term if anchored to the principles of AM. A further challenge is that the implementation of SWC measures in agricultural landscapes lacks monitoring, stakeholder engagement, and longer-term impact assessment, and that the approach in general lacks the learning ingredients of the AM cycle [16,17,56]. Because the impacts of SWC measures are a function of time, context-specific intervention, the development of a matrix of evaluation criteria, and the involvement of the local community, as illustrated in the AM cycle and theory of change, are crucial [16,17]. As summarized in Table 2, the total number of incidents in which AM elements were mentioned as recommendations or gaps in current SWC practice was 34. Of these, 71% fell under the element of structured decision-making, while the rest related to the learning components of AM. Recent initiatives by the World Bank to include hydro-meteorological monitoring systems as part of the SLM program (phase III) in Ethiopia might in part be a response to such criticism.
Some of the key gaps, such as the lack of evaluation, contextualization of interventions, and assessment of outcomes of SWC measures in relation to livelihood improvement, are summarized in Table 3.

Small Water Harvesting Structures

Expanding water harvesting structures is one of the adaptation mechanisms necessary for transforming landscape structures for better ecosystem service provision in the face of climate change. As concomitant benefits, water harvesting can reduce surface runoff and erosion and recharge groundwater. Accordingly, since the 2000s, many regional and national governments have promoted the implementation of water harvesting structures to improve livelihoods and adapt to climate change [58,59]. However, the impacts of implemented water harvesting structures (such as farm ponds and micro dams) on livelihoods are constrained by siltation, seepage losses, insufficient inflows, structural damage, and spillway erosion [60]. In this regard, Gebremedhin et al. [60] showed that 61% of the water harvesting structures constructed in northern Ethiopia had siltation problems, 53% suffered from leakages, 22% had insufficient inflows, 25% were handicapped by structural damage, and 21% faced spillway erosion problems. Furthermore, the lack of benefit-sharing mechanisms hampered improvements in equity, as in most cases better-off farmers benefited more than poor farmers [61-63]. This suggests that the siting and design of these structures (to reduce seepage losses), their construction and maintenance (to combat siltation), and their governance need to be improved. This substantiates the evidence summarized in Table 2, which shows the highest total number of AM elements mentioned as recommendations or gaps for current water harvesting structures and practices (43). Of these, 81% related to the element of structured decision-making (Table 2).
Despite the huge potential, both in terms of available runoff and land resources, what has been achieved and recorded in this respect to date is limited: many interventions related to small water harvesting structures and practices did not meet farmers' expectations, and there are several cases of dis-adoption. We argue that enabling AM and incorporating elements of the impact pathway, as suggested by Sayer et al. [17], and of AM, as suggested by Birge et al. [16], would be a good starting point to overcome some of the bottlenecks. Adaptive management demands early community engagement, understanding of trade-offs, monitoring of changes and impacts, and learning therefrom [64]. Therefore, if tailored to context and adopted, it could mitigate the negative environmental, economic, and social consequences of small water harvesting interventions currently observed.

Livelihood Transformation through Natural Resources Management and Agricultural Activities

As indicated in Figure 1, the tertiary layer of agricultural landscape structure focuses on how agricultural and natural resource management-related activities are transforming livelihoods. We use the Productive Safety Net Program (PSNP), in which millions of farmers participate each year, as an example to illustrate the key role the AM approach could play in sustaining the impacts of natural resource management interventions on livelihood transformation. One of the major components of the PSNP is a public works program, into which eligible households with able-bodied adults are enrolled and which involves enhancing agricultural landscape structure (soil conservation structures). These public works activities occur for six months of each year, during which clients receive a payment based on their household size. Public works clients are expected to graduate from the program when they gain sufficient assets.
Relating this to AM would require us to answer the following questions: (i) whether sufficient evidence is available on how the implementation of different land and water management interventions under the PSNP transforms the three livelihood clusters, (ii) what lessons have been generated from the evidence, and (iii) which of these lessons have been used to plan and design the next phases of the program. In the case of the PSNP, many studies have focused on the impact of payments made to program participants on wealth accumulation and local infrastructure development (e.g., roads, schools, etc.) rather than on the actual longer-term environmental and livelihood impacts of these interventions [65]. This clearly illustrates inadequate knowledge management efforts on how investments in agricultural landscapes are transforming livelihoods and the local economy. Several other land and water management programs also lack short-term and long-term evidence of impacts on smallholder livelihoods. This is supported by the proportion of learning elements of AM (out of the total number of instances mentioning AM elements) cited as recommendations or gaps for the livelihood transformation-related indicators (56%; not shown in Table 2).

Table 3. Examples of recent agricultural LULC studies in Ethiopia and the key gaps they discussed in relation to the adaptive management cycle.

Authors | Focus Issues | Spatial and Temporal Scale | Key Conclusion | Examples of Reflection on AM
Nyssen et al. [22] | Land-cover change | 145 years; northern Ethiopian mountains; 361 landscapes appearing on historical photographs (1868-1994) were re-photographed | The northern Ethiopian highlands are currently greener than at any time in the last 145 years. | Lack of explicit evidence on trade-off outcomes and contextualization of the problem (example: eucalyptus-dominated LULC).
Tadesse et al. [28] | Land-use land-cover change and erosion

The only available comprehensive information on how agricultural activities improve livelihoods and the level of poverty is reported by the World Bank [13]. Using the international poverty line (USD 1.90 per day at 2011 purchasing power parity (PPP)) as a yardstick, poverty is reported to have fallen from 55.3% in 2000 to 33.5% in 2011. A decomposition of yield increase reveals the importance of increased input use (e.g., improved seeds and agrochemicals) as well as total factor productivity growth (the ratio of aggregate output (e.g., GDP) to aggregate inputs; 2.3% per year). A doubling of the adoption of improved seeds and fertilizer played a major role in sustaining higher yields. Ethiopia's real GDP has tripled since 2004, although it remains well below regional and low-income country levels. Recent work by Sheahan and Barrett [63] revealed that less than 4% of farm households in Ethiopia use integrated inputs consisting of inorganic fertilizer, irrigation, and improved seed varieties, which implies that there are untapped productivity gains to be made from coordinated modern input use by deploying governance mechanisms (and improved knowledge and skills, supply chains, business models, etc.) to promote its uptake. This supports the claim discussed earlier that future Ethiopian food production largely depends on intensification; it also implies the need for an integrated and evidence-based approach, supported by AM, to ensure a sustainable intensification pathway [16]. Adimassu et al. [10] summarized the impacts of SWC practices on the grain yield of crops. They indicated that the impacts of SWC measures on grain yield are divergent and influenced by the type of SWC measure, and concluded that most of the physical SWC measures were less effective in enhancing grain yield, attributing the reduced yield to the trade-off of the increased area the SWC structures occupied [10].
In high-rainfall areas there was a higher likelihood of waterlogging, which contributed to the reduced yield, implying a lack of contextualization of interventions as suggested in AM [16].

Examples of Transformed Landscape Functional Indicators

Several studies have demonstrated that the implemented SWC measures had positive impacts on reducing surface runoff and sediment load. For example, a study conducted in northwestern Ethiopia by Dagnew et al. [66] showed that SWC practices significantly reduced the daily, monthly, and annual runoff and sediment load compared to untreated lands. Zegeye et al. [67] reported that gully head treatment reduced surface runoff by up to 42% compared to the runoff generated from untreated gullies. Gebremichael et al. [34], who illustrated trends of Blue Nile flow and sediment load at the outlet of the Upper Blue Nile basin at El Diem station, reported statistically significant increasing trends of annual stream flow, wet-season stream flow, and sediment load at the 5% significance level, while the dry-season flow showed a significant decrease. However, during the same period, annual rainfall over the basin showed no significant increase. The counter-intuitive question is why a larger, basin-wide impact assessment (e.g., Gebremichael et al. [34]) showed increasing trends of sediment yield and runoff, while SWC measures have proven effective at smaller scales, as suggested, for example, by Dagnew et al. [66]. This could be explained by the fact that the overall area where SWC measures were applied is still small and that the total basin runoff is dominated by the over-proportional increase of runoff coming from non-SWC areas. One of the lessons in terms of application of the AM cycle is that most of the SWC measures lack proper geographic, plot-level, and social targeting [10]; thus, the positive impact on landscape ecosystem functions (runoff, sediment yield) is low.
This is counter-intuitive given decades of experience of SWC research and the generation of many context-specific technologies and guidelines in Ethiopia. We argue that while generating contextualized SWC technologies is a key step, the presence of the right institutions is what enables the use of these technologies and the adoption of AM practices. In Ethiopia, several institutional and policy gaps that explain the lack of contextualization of SWC measures can be enumerated. These include, for example, organizational instability; inefficient organizational structure (due to understaffing and under-equipping); lack of linkages and alliances between institutions; shortage of skilled manpower; inadequate office and workshop facilities; and lack of integrated information management systems. Sayer et al. [17] and Birge et al. [16], in relation to the AM cycle for landscape interventions, suggested defined objectives, site-specific intervention, trade-off analysis, and community participation as important ingredients for sustainable landscape management. These gaps in real-world case studies are reflected in the number of instances (Table 2) where these AM elements were mentioned in the reviewed literature as recommendations or gaps, though they are fewer compared to the other indicators considered. Haileslassie et al. [68] argued that farm systems are heterogeneous: each farm system is unique in terms of its livelihood assets (including both biophysical and socioeconomic resources) and agricultural practices, and therefore unique in terms of sustainability. Considering this, the use of a single indicator such as crop yield or runoff to evaluate the transformation of landscape functions is inappropriate. Conceptually, heterogeneity applies also to scales of analysis. For example, a sediment yield assessment or runoff measurement at plot level will have different values and implications compared to watershed-level estimates because of sediment redistribution pathways.
Therefore, a conclusion about the impacts of change of agricultural landscape structure on landscape functions based on a single scale and incomplete indicators is misleading. When it comes to landscape functional indicators, the challenge is to develop spatially explicit monitoring and learning techniques, incorporating the suggestions by Sayer et al. [17] and Birge et al. [16], to support management (Table 2). In sum, the implementation of SWC measures in the country (i) failed to match hotspot areas with technologies and farms [61][62][63]69]; (ii) did not prevent the intensification of erosion processes after the LULC changes in the central, western, and southern parts of Ethiopia, which were covered with non-cultivated land during the initial "wake-up" stage of the need for SWC measures [34,57]; (iii) lacked standard evaluation criteria and a comprehensive matrix addressing the temporal, spatial, and social dimensions of SWC [69]; (iv) failed to counterbalance the impacts of historical LULC changes; and (v) often failed to engage farmers and thus did not manage to increase adoption at larger scale [27]. The major challenges in relation to indicators of landscape functions (persisting erosion, increasing runoff, and siltation of water harvesting structures, downstream water bodies, and infrastructure) are related to these failures. The above summaries also point to a lack of structured decision-making and learning processes in development interventions. The traditional trial-and-error approach, which is often the dominant practice in current land and water management in Ethiopia, can accomplish learning from what went wrong in the past, considering a feedback loop and adapting when necessary to avoid similar mistakes. Such an approach is nevertheless not suitable for ecological systems, for two reasons. First, slow feedback may mask long-term undesirable management outcomes. Second, ecosystems do not recalibrate to some predictable, stable state following failure.
Instead, management mistakes can be persistent and costly [16]; thus AM, which (unlike traditional trial and error) emphasizes learning while doing, could be appropriate [64].

Impacts on Carbon Sequestration and Biodiversity

Studies have demonstrated that the various land and water management measures implemented in the country (changes in landscape structure) contributed to the restoration of both below- and above-ground carbon storage. For example, Woolf et al. [70] estimated the mean carbon benefit (both above- and below-ground carbon) across the Productive Safety Net Program (PSNP) sites at about 5.7 t CO2e per ha per year. Extrapolating these results to the whole intervention area of the PSNP (600,000 ha) implies that a total carbon benefit in the order of 3.4 million t CO2e per year has already been achieved by the PSNP. Similarly, studies by Mekuria et al. [53,54] and Anwar et al. [71] demonstrated that land and water management practices (mainly exclosure) are effective at increasing ecosystem carbon stocks (ECS). For example, a study conducted by Mekuria et al. [53,54] in Tigray, the northernmost part of Ethiopia, showed that differences in ECS between exclosures and grazing lands varied between 29 (±4.9) and 61 (±6.7) t C ha−1 and increased with exclosure duration. A study in northwestern Ethiopia [56] showed considerable increases in above-ground carbon (ranging from 0.6 to 4.2 t C ha−1) following the establishment of exclosures. Anwar et al. [71] showed that over a period of six years, above-ground biomass increased by 56 t ha−1 (or 81%) at the watershed scale because of the conversion of communal grazing land to exclosure. SWC measures were also effective in improving biodiversity. For example, studies [53,70] detected higher plant species richness and diversity in exclosures compared to communal grazing lands.
Furthermore, the differences in plant species richness and diversity compared to adjacent communal grazing lands increased with the age of the exclosure. In the AM approach, spatially explicit analyses and continuous monitoring are important to inform local authorities about the gains and losses of investments and to continuously adapt the management approach as necessary. For farm households contributing free labor, the establishment of exclosures for carbon sequestration may not make sense: smallholder farmers are often risk-averse and focused on short-term gains. The bottom line, however, is how such community segments can be better incentivized in light of broader objectives, and how to understand and minimize all trade-offs arising from such interventions, to enable wider adoption of the practice and support sustainable landscape transformation. For example, exclosures require land, labor, and water, but all these resources have opportunity costs. The question then is whether the benefits from carbon sequestration and restoration of biodiversity can exceed the opportunity costs of these inputs today and tomorrow, and how farmers who bear the costs (labor, land loss, etc.) can be compensated, in real money, by polluters who pay for carbon emissions. These are important issues that research and AM need to explore in order to make interventions context-specific and sustainable. This is demonstrated by the high proportion of learning elements of AM among the total number of instances in which AM elements were mentioned as recommendations or gaps for the carbon sequestration-related indicators (60%; not shown in Table 2).
Conclusions

This review synthesized evidence of transformed structural (e.g., LULC, small water harvesting structures, exclosure, and livelihood transformation) and functional (e.g., production, runoff, sediment, biodiversity, and carbon sequestration) elements of the Ethiopian agricultural landscape, and identified gaps in the application of an AM cycle in the planning and implementation of SWC measures. Despite the numerous environmental and socioeconomic benefits of land and water management interventions, the application of elements of the AM cycle has emerged as an important gap in Ethiopia, although with differing magnitude across elements (Table 2). The most frequently mentioned elements of AM in the reviewed literature were lack of contextualization of interventions (20%), followed by explicit trade-offs (17%) and negotiation and feedback (13%). Although the elements of AM are not mutually exclusive, these trends show where the future focus of investment in land and water management should be. Overall, the gaps can be summarized as follows: (i) insufficient knowledge management efforts, particularly in relation to evidence-based contextualization of interventions and continuous post-intervention monitoring of biophysical and socioeconomic changes; (ii) lack of evidence of trade-off analysis and implementation of management options; (iii) inadequate engagement of local communities at the onset of interventions and provision of feedback mechanisms as implementation evolves; and (iv) information gaps on outcome and impact estimation and on how land and water management interventions transform local community livelihoods across space and time. Against this background, we conclude that the planning and implementation of interventions to transform agricultural landscapes for improved ecosystem services need structured decision support and continuous learning tools, which are currently limited in Ethiopia and many SSA countries.
Given the many uncertainties we must deal with (e.g., the impacts of climate change), land and water management intensification should not follow a business-as-usual approach. Addressing the identified gaps will help to attain the sustainability of land and water management interventions and to ensure that people's needs are met now and in the future. For this to happen, SSA, and specifically Ethiopia, the focus of this review, needs to follow an AM approach in landscape transformation. While the application of AM is a useful tool to guide natural resources decision-making, it is not an end in itself; it is rather a means to informed decisions. Success in SSA depends, among other things, on how AM is tailored to context (social, environmental, and economic) and on the ability to learn from the knowledge generated and apply the lessons as implementation evolves. The measure of its success should be how well it helps meet sustainable landscape transformation goals across scales.
Safety and Security of Sexual-reproductive Health and Gender-based Violence among Rohingya Refugee Women in Bangladesh

Rohingya refugee women and girls come from a vulnerable society and are taking shelter in Bangladesh for humanitarian assistance following the serious human rights violations in Myanmar. They face a number of challenges, such as insecurity, violence, and very limited freedom of movement or ability to speak up and influence decisions in their communities. They are highly vulnerable to exploitation due to inadequate basic living facilities in the camps, exposing them to physical or sexual abuse, forced prostitution, and human trafficking. Gender-based violence, abandonment by their husbands in the camps, early marriage, and teenage pregnancies, including the lack of safer pregnancy and childbirth, are all important issues and challenges they face. Access to basic amenities and educational opportunities, with special attention to sexual and reproductive health, including issues such as gender equality, relationships, and conflict management, together with adequate community health care, can help Rohingya women to overcome the situation. The word "Rohingya" derives from the people who have lived, from the medieval period through British rule, in the current "Rakhine" state, formerly known as "Roshang" and later "Rohang" due to colloquial usage. Although officially Myanmar does not use the term "Rohingya", as this might potentially endorse their indigenous origin, international involvement is required to find a solution for the sustainable return of Rohingya refugees to Myanmar.

Introduction

The Rohingya are a Muslim community living in Rakhine State of western Myanmar who are officially stateless and disowned by the Myanmar government, as the 1982 Citizenship Law took away their Myanmar citizenship and forced them to face a severe humanitarian crisis. They differ in language, appearance, and religion from Myanmar's dominant Buddhist population 1,2.
They have been forced to migrate to the neighboring countries of Thailand, Bangladesh, and Malaysia due to religious and geographical factors 3. In Bangladesh, this forced migration of the Rohingya population took place as early as 1978, and again in 1991-1992, due to torture by Myanmar security forces including rape, arrests, and executions; these two occasions displaced almost 250,000 people in total 4. Those who remained in Myanmar faced anti-Muslim riots in 2001 and later violence between Rohingya Muslims and Arakanese Buddhists in Rakhine State, which caused deaths and destruction of property, leading to another mass displacement in 2012. The most recent displacement occurred in October 2016, when a Rohingya armed group, the Arakan Rohingya Salvation Army (ARSA), attacked and killed nine police officers, causing the Myanmar military to respond with a major security operation marked by human rights violations 4. After that incident, the largest influx of Rohingya refugees, over 900,000, fled to Bangladesh for humanitarian assistance 5,6. One report states that, as of April 2019, approximately 911,359 Rohingya refugees in total had taken shelter in Cox's Bazar, including 34,172 refugees who were registered before 31 August 2017 7. The vast majority of the Rohingya population live in 34 camps, and the largest, the Kutupalong-Balukhali Expansion Site, hosts approximately 626,500 Rohingya 8,9. Of the total number of Rohingya, approximately 52% are women and girls 10. Another report, by UNICEF, mentioned that 67% of the refugees are female, which amounts to 335,670 female refugees 11. Many of the women are alone with their children and bear the responsibility for their families 12.
It is estimated that 16% of the total number of Rohingya refugee households in Bangladesh are female-headed 13. The Bangladesh Government has provided shelter for them as well as other basic requirements such as food, clean water, and health facilities, with help from the humanitarian world and the people of Bangladesh. In Myanmar, they had been exposed to persistent persecution and conflict; compared to that situation, they feel that their life is better in Bangladesh. However, the displaced Rohingya refugees still face numerous challenges in camp life, and the challenges faced by women and girls differ from those of the male population. They face challenges concerning safety and security, sexual and reproductive health (SRH) issues including safer pregnancy and childbirth, marriage practices, gender-based violence (GBV), and other related issues. The objective of this article is to review the issues and challenges faced by Rohingya women and girls in their camp accommodation and the steps taken to improve the situation.

Rohingya Women

The Rohingya are a conservative community, with social and cultural norms that restrict women's empowerment. Women generally experience barriers to freedom of movement when they reach puberty. They are confined to the home and restricted from public places. They also have restricted access to and control over resources 14. Thus they faced double restrictions in their homeland: limitations imposed by the government and military during their stay in Myanmar, and limitations imposed by the men in their community. A previous study of 3000 Rohingya refugees showed that 94% of women had no say in consenting to their marriage, and that 45% were married as children.
Again, 95% of them reported that the main role of women is cooking; 53% of women believed that they should not be allowed to leave the house; and 42% of them reported spending an average of 21-24 hours per day inside their house. Thus their mobility, leadership skills, and decision-making capability are all hindered, which can have a negative impact on their lives 10. Having grown up in such a situation and been displaced from their country of origin to refugee status in a different country, their situation has become terribly challenging.

Issues and challenges related to safety and security

Rohingya refugee women face challenges such as insecurity, violence, and very limited freedom of movement or ability to speak up and influence decisions in their communities 12. Families headed by women and elderly persons with no male relatives are more vulnerable than those with adult males. Because of the economic compulsion in the new camp situation, the women cannot be confined to the house; they need to accomplish a large number of difficult tasks for their families, such as cooking, collecting water, monsoon-proofing their huts, fixing roofs, breastfeeding, chopping and carrying firewood, and collecting rations 15. These single women or single mothers, in addition to coping alone in the refugee camps, experience access barriers to humanitarian relief services for food, shelter, and other needs 10. They report being harassed while performing essential tasks, such as collecting water or using the latrine, and feelings of 'shame' around using water, sanitation and hygiene (WASH) facilities 16,17. Differences in spoken language prevent them from obtaining specific healthcare and other humanitarian assistance 18. The barriers to leaving their shelters also include cultural respect for the practice of purdah, fears around safety, the burden of care work for the family, lack of public lighting, lack of appropriate clothing, and lack of women-only spaces 17.
The safety issue is so significant that they are under constant risk of rape while leaving their shelter for daily work, and especially at night while using the latrines 19.

Sexual and reproductive health (SRH) issues

Rohingya women and girls are highly vulnerable to exploitation in terms of SRH due to inadequate basic living facilities in the camp. They are reported to be physically or sexually abused, or even forced into prostitution 20. In the camp, the lack of income-generating activities exposes women and adolescent girls to increased risk of exploitation in the form of trafficking for commercial sexual purposes, forced marriages, and forced labor 21. Child marriage is a common preference in the absence of well-defined laws on the minimum age at marriage and of legal documentation processes for marriage registration in the camps 22. These practices are considered a negative coping mechanism to ease economic and food insecurity 20,21. They are even reported to take part in the illegal drug business 21, which, together with illegal sex work, increases the risk of exposure to sexually transmitted infections among Rohingya women and young girls 18. They are highly vulnerable to human trafficking, both inside and outside the country, following the global pattern of trafficking 20. Criminal syndicates have grown up to exploit the women in the refugee camps and traffic them to Malaysia on unsafe boats. They actively lure Rohingya women from various camps in Bangladesh, convincing them to go to Malaysia to marry Rohingya men there and thus escape the poverty they suffer in the camp; they even assure them of finding some kind of work, which in turn could help their families too 23. Safer pregnancy and childbirth are of great concern among the Rohingya refugees, although maternal health care facilities are available in the camp 24.
It is reported that in the camp's overstretched health-care facilities, 24,000 pregnant and lactating Rohingya women require maternal health-care support 10. The social, cultural, and historical context influences their views on childbearing, family planning, and contraception. They do not willingly go to health care services for childbirth, either due to ignorance or because they are prevented by their husband or mother-in-law, who are the important decision-makers regarding attendance at the health care facility. Therefore, home delivery using traditional birth attendants is preferred, even though they currently reside in the camp environment. Family planning and contraception are not practiced due to the preference for large families, religious beliefs, and stigma and misconceptions about contraception 24. Their ignorance about HIV/AIDS and other sexually transmitted infections, owing to the lack of education and health service opportunities they received while residing in Myanmar, makes them more susceptible to such diseases 22. Although sexual and reproductive health services are currently provided at the refugee camps, with priority given to lifesaving activities, access to essential comprehensive reproductive, maternal, and newborn health services remains a major issue, due to societal norms and culture as well 1. It is very important to teach them about sexual and reproductive health and well-being and issues such as gender equality, pubertal changes and hygiene, relationships, and conflict management 24.

Gender-based violence

Gender-based violence (GBV) is a human rights violation covered by international human rights conventions 25. Both women and men experience GBV, but in broad terms it denotes violence against women and girls, whether committed through sexual violence or through other means. Gender-based violence includes domestic violence, sexual harassment, sexual violence, and rape 26.
Most cases of GBV are sexual; the victims are female, while the perpetrators are male 27. It is assumed that Rohingya women have been exposed to sexual violence at all three stages: during their stay in Rakhine State, during flight, and during their refugee status in the camps in Bangladesh 27. Rohingya women and girls have experienced GBV, including rape and sexual assault by the Myanmar army, for a long time, although many victims do not report their ordeal out of concerns over safety, confidentiality, shame, and stigma 28. Many of the survivors of the serious human rights violations in Myanmar reported sexual violence such as sexual assault, rape, and gang rape 29. No immediate services were provided to rape survivors in Myanmar, such as access to urgent interventions like emergency contraception (within 120 hours) and prophylaxis against HIV infection (within 72 hours); the Myanmar government obstructs humanitarian access to Rakhine State 30. Many rape cases have resulted in pregnancy, leading to unsafe abortion 31. It is estimated that at least 2.6% of those subjected to sexual violence died 32. The survivors are susceptible to contracting sexually transmitted diseases 18. The United Nations Population Fund (UNFPA) reported that since late August 2017 they have assisted 3500 sexually assaulted Rohingya women 16. Based on a 6% reporting rate of rape victims to the UNFPA, it is estimated that around 58,300 Rohingya women and girls experienced sexual violence 11. In Bangladesh, with their current refugee status, Rohingya women are more vulnerable to domestic and community violence due to the breakdown of social norms and disruption of the family structure 27. UNHCR (2003) identifies unequal power relationships as an underlying reason for GBV in the refugee camps in the Cox's Bazar area.
The perpetrators of sexual assault and harassment are either men from their own community or men from the host community, such as villagers or people from the camp itself, including security personnel 27,33. In a report by UNFPA in August 2018, over 10,000 incidents of GBV were recorded over the preceding year 16. Ignorance about gender equality and rights makes the women vulnerable to domestic violence from the husband, other family members, or the community 24. Rohingya women and girls enter a vicious cycle of GBV and lead lives without access to fundamental rights as a consequence of lack of education, low income generation, lack of access to adequate healthcare, increased vulnerability to trafficking and forced prostitution, early marriage, teenage pregnancies, and poor quality of living 34. Abandonment by their husbands in the camps is a newly emerging issue for Rohingya women: husbands are leaving their wives and even children, either to escape beyond the Rohingya settlements into other parts of Bangladesh or to marry another woman in the camp. As no marriage registration is necessary, polygamy has become a common trend in the refugee camps 15. Save the Children, Action Against Hunger, and HOPE Hospital have been involved in providing health services in the camps 37,38,39,40. Local NGOs, together with religious institutions, provide relief work and other basic services, and international NGOs deliver services such as relief, water, sanitation and hygiene (WASH), training, protection, shelter, and other activities as required by the government 35.
To strengthen SRH services for Rohingya refugees, the Inter-Agency Working Group (IAWG) on Reproductive Health in Crises is working to increase provision of and access to the Minimum Initial Service Package (MISP). The IAWG is currently governed by a steering committee of 12 agencies representing UN agencies, national and international non-governmental organizations (NGOs) and academic organizations. The MISP for Reproductive Health (RH) was first articulated in 1996 to provide basic reproductive health services during the first phase of an emergency. The MISP comprises a set of activities to prevent and manage the consequences of sexual violence, reduce HIV transmission, prevent excess maternal and newborn morbidity and mortality, and plan for comprehensive reproductive health services 4,[41][42][43]. UNFPA is a leading organization that works through the United Nations, international and local non-governmental organizations, and government agencies to provide sexual, reproductive and maternal healthcare services and to address GBV. It has established 18 women-friendly spaces in the refugee camps and two in the host community, which the Rohingya refugees call "shanti khana", or homes of peace. UNFPA supports these safe spaces to provide medical, psychosocial and legal services for survivors of GBV, including post-rape care. It provides voluntary family planning at the safe spaces and prenatal and delivery care through midwives, and distributes thousands of dignity kits, which include menstrual hygiene supplies, soap, clothes, a flashlight and a whistle, to help women move around more freely and confidently. Women and girls are also able to receive training in sewing and to earn money from their work 4. UNFPA has also arranged training and a skills center for midwives to provide care to pregnant women 4.
Despite the services provided by the government and international organizations, concerns remain about safety, security and other issues. Challenges include a high rate of home delivery, insufficient health clinics, and limited access to voluntary contraception, HIV/STI treatment and comprehensive post-rape care. A number of factors act as barriers, including limited mobility aggravated by the monsoon season, lack of awareness, cultural barriers, and even opposition from husbands and family 43. Greater attention needs to be paid to the SRH of the large number of Rohingya women and girls 43. A survey in March 2018 showed that 56.6% of pregnant women received no antenatal care, and 73.7% delivered at home without a certified birth attendant 44. According to the 2019 Joint Response Plan, only 43% of minimum service coverage had been achieved for urgently required GBV case management and psychosocial support (PSS) for children and adults by November 2018 8. Of the 34 camps, four were still not covered by essential minimum GBV services, such as case management, access to psychosocial support, clinical management of rape (CMR), legal counselling and safe spaces for women and girls, while five camps had only 25-50% of GBV service coverage 8. The word "Rohingi" or "Rohingya" derives from the people who have lived, since the medieval period and through British rule, in present-day Rakhine State, formerly known as "Roshang" and later, through colloquial usage, "Rohang"; Myanmar even has an official policy of not using the term "Rohingya", as this might potentially endorse the indigenous origin of the community, which currently makes up the largest stateless community in the world 45. The injustice imposed on the refugees through violations of their human rights creates a deep wound in humanity.
The world needs to understand that refugees have the same needs as other people and equal rights to have those needs met 46. International involvement is necessary to find a solution for the sustainable return of Rohingya refugees to Myanmar.
Recommendation
Rohingya women and girls, who have suffered many years of oppression and deprivation of basic services, should have a basic level of safety and security 30. Policymakers need to pay special attention to overcoming the challenges and ensuring their optimal care and rehabilitation. Strong emphasis should be placed on basic needs such as food, water, health, sanitation, shelter and protection, and on the special needs of women and girls, including safer pregnancy and childbirth; the prevention of and response to GBV; and education and life skills for children and youth who will, in all probability, become adults in the camps of Cox's Bazar 24. It is necessary to find a solution for the safe, dignified, voluntary and sustainable return of refugees to Myanmar, and international involvement and a long-term plan for their future rehabilitation and establishment in their own country are very much needed.
Conclusions
This review highlights the issues and challenges related to safety, sexual health and GBV among Rohingya refugee women in Bangladesh. It is essential to take steps towards: i) adequate security through strengthened law enforcement; ii) access to basic amenities; iii) educational opportunities, with special attention to sexual and reproductive health, including issues such as gender equality, relationships and conflict management; iv) adequate community healthcare and SRH services with trained female staff who have language proficiency; v) skill-based training programs, such as sewing, embroidery and handicrafts, that can allow women to earn money to help their families; and vi) international involvement and a long-term plan for the refugees' future rehabilitation and establishment in their own country.
Conflict of Interest: We declare that there is no financial or other conflict of interest. Funding: Not related. Ethical Approval Issue: Not applicable.
v3-fos-license
2018-04-03T00:26:58.848Z
2017-06-27T00:00:00.000
6750042
{ "extfieldsofstudy": [ "Chemistry", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://downloads.hindawi.com/journals/jdr/2017/6785852.pdf", "pdf_hash": "83f811f745ff0f978bd221623032a97f16b57abc", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1256", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "sha1": "62d5f5b994dac84c876551e5a3916c4d49c3ee94", "year": 2017 }
pes2o/s2orc
Aldose Reductase Inhibitor Protects against Hyperglycemic Stress by Activating Nrf2-Dependent Antioxidant Proteins We have shown earlier that pretreatment of cultured cells with aldose reductase (AR) inhibitors prevents hyperglycemia-induced mitogenic and proinflammatory responses. However, the effects of AR inhibitors on Nrf2-mediated anti-inflammatory responses have not yet been elucidated. We investigated how the AR inhibitor fidarestat protects against high glucose- (HG-) induced changes in cell viability by increasing the expression of Nrf2 and its dependent phase II antioxidant enzymes. Fidarestat pretreatment prevented the HG (25 mM)-induced loss of Thp1 monocyte viability. Further, treatment of Thp1 monocytes with fidarestat caused a time-dependent increase in both the expression and the DNA-binding activity of Nrf2. In addition, fidarestat augmented the HG-induced Nrf2 expression and activity and also upregulated the expression of Nrf2-dependent proteins such as hemeoxygenase-1 (HO1) and NQO1 in Thp1 cells. Similarly, treatment with the AR inhibitor also induced the expression of Nrf2 and HO1 in heart and kidney tissues of STZ-induced diabetic mice. Further, AR inhibition increased the HG-induced expression of antioxidant enzymes such as SOD and catalase and the activation of AMPK-α1 in Thp1 cells. Our results thus suggest that pretreatment with an AR inhibitor prepares monocytes against hyperglycemic stress by overexpressing Nrf2-dependent antioxidative proteins. Introduction Hyperglycemia is a major contributor to inflammation, apoptosis, profound vasodilation, tissue damage, and dysfunction in patients with diabetes mellitus [1]. The cytotoxicity of hyperglycemia is mediated by an increase in reactive oxygen species (ROS), which activate NF-κB and AP1, resulting in the transcription of inflammatory cytokines [2].
Our recent studies indicate that inhibition of the polyol pathway enzyme aldose reductase (AR) prevents cytokine- and hyperglycemia-induced increases in inflammatory markers in macrophages, vascular cells, and diabetic mice [3,4] by preventing the activation of NF-κB- and AP1-induced proinflammatory signals [3,5]. We have shown that preincubation with an AR inhibitor prevents hyperglycemia- and cytokine-induced proliferation of vascular cells and apoptosis of macrophages [6][7][8]. While these studies indicate that inhibition of AR could prevent oxidative stress-induced inflammatory responses, the mechanism(s) by which inhibition of AR prepares cells against oxidative stress is not known. Previous studies indicate that the transcription factor nuclear factor-erythroid-2-related factor 2 (Nrf2) regulates a battery of cytoprotective genes which maintain cellular redox homeostasis. Nrf2 binds to the antioxidant response element (ARE) and transcriptionally regulates the gene expression of several antioxidant and phase II detoxifying enzymes including hemeoxygenase 1 (HO1), NAD(P)H-quinone dehydrogenase 1 (NQO1), γ-glutamylcysteine synthetase (GCS), glutathione S-transferases (GSTs), and AR [9]. Generally, under nonstress conditions, Nrf2 complexes with an adaptor protein, Keap1, which regulates the proteasomal degradation of Nrf2. Under stress conditions, however, Nrf2 dissociates from Keap1, translocates to the nucleus, and transcribes the genes responsible for defense against stress. Thus, it has been well established that activation of the Nrf2 pathway in response to oxidative stress protects cells and tissues from oxidative injury [10]. Although our previous studies indicate that AR inhibition prevents hyperglycemia-induced NF-κB-dependent inflammatory signals, it is not known how AR inhibition increases the ability of cells to withstand oxidative stress initiated by hyperglycemia.
Therefore, in this study, we examined our hypothesis that AR inhibition promotes the activation of Nrf2-mediated cytoprotective pathways that protect cells against hyperglycemic stress. Our results suggest that the AR inhibitor fidarestat increases both the expression and the DNA-binding activity of Nrf2 in Thp1 monocytes. In addition, fidarestat increased the expression of Nrf2 downstream target proteins such as HO1 and NQO1 in Thp1 cells and in heart and kidney tissues of STZ-induced diabetic mice. The AR inhibitor also increased the expression of the antioxidant enzymes SOD and catalase. Collectively, our results demonstrate that AR inhibition protects cells against hyperglycemia-induced changes in cell viability by activating the Nrf2/HO1-mediated antioxidative pathway, which could also account for the anti-inflammatory effects of AR inhibitors in diabetes. Cell Culture and Treatment. Human Thp1 monocytic cells were obtained from the American Type Culture Collection (ATCC) and cultured in RPMI-1640 medium supplemented with 10% FBS and penicillin/streptomycin at 37°C in a humidified atmosphere of 5% CO2. Prior to treatment, cells were serum starved in the respective medium containing 0.1% FBS ± fidarestat (10 μM) for 14 h and then stimulated with high glucose (25 mM; 19.5 mM glucose was added to media containing 5.5 mM glucose) for different time intervals. 2.3. Cell Viability. Cell viability was determined using a standard MTT assay [11]. Briefly, Thp1 cells were growth arrested in RPMI medium containing 0.1% FBS. Cells were preincubated with fidarestat (10 μM) overnight (14 h) at 37°C, followed by incubation with HG (25 mM) for another 48 h. At the end of the incubation period, cells were incubated with 10 μl MTT reagent (5 mg/ml) for 4 h at 37°C. The formazan crystals formed by the viable cells were solubilized by the addition of 100 μl DMSO. Absorbance was measured at 570 nm using a microplate reader.
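As a rough illustration of the MTT readout described above, percent viability is typically computed from background-corrected 570 nm absorbance relative to the untreated control. The absorbance and blank values below are hypothetical, not data from this study:

```python
# A rough sketch (hypothetical values, not data from this study) of how
# MTT absorbance at 570 nm is converted to percent viability relative to
# the untreated control after subtracting a media-only blank.
a570 = {"control": 0.82, "HG": 0.51, "fidarestat+HG": 0.74}  # assumed readings
blank = 0.05  # assumed background absorbance of media + MTT without cells

def viability_percent(sample, control="control"):
    """Percent viability of `sample` relative to `control`."""
    return 100 * (a570[sample] - blank) / (a570[control] - blank)

for s in ("HG", "fidarestat+HG"):
    print(s, round(viability_percent(s), 1))  # HG ~59.7%, fidarestat+HG ~89.6%
```

With these assumed numbers, HG alone reduces viability to roughly 60% of control while fidarestat pretreatment restores it to roughly 90%, mirroring the qualitative pattern reported in Figure 1(b).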
Cell viability was also examined by counting the live and dead cells using a hemocytometer [6]. AR activity (mU/mg protein) was measured spectrophotometrically using glyceraldehyde as a substrate [6]. Nrf2 DNA-binding activity was determined with an Nrf2 transcription factor assay kit as per the manufacturer's instructions (Cayman Chemical). 2.4. Immunoblot Analysis. Nuclear and cytoplasmic proteins from the treated cells were isolated using a nuclear extraction kit (Cayman Chemical). Protein concentration in the extracts was measured with Bradford reagent (Bio-Rad). Equal amounts of protein were subjected to 12% SDS-PAGE followed by Western blot analysis using specific antibodies against Nrf2, Keap1, HO1, NQO1, AMPK, histone H3, and GAPDH. The antigen-antibody complexes were detected with SuperSignal West Pico Chemiluminescent Substrate (Thermo Scientific). Membranes were stripped with Restore PLUS stripping buffer (Thermo Scientific) and reprobed with other antibodies or loading controls. 2.5. Ablation of Nrf2 by siRNA. Thp1 cells cultured in RPMI-1640 medium containing 10% FBS were incubated with Nrf2 siRNA (120 nM) or a negative control siRNA using HiPerFect Transfection Reagent as per the manufacturer's instructions (Qiagen, USA). The cells were incubated in a humidified CO2 incubator for 48 h at 37°C. Silencing of Nrf2 was confirmed by Western blotting. 2.6. Quantitative RT-PCR Analysis of HO1 mRNA. Total RNA was isolated from the treated cells using TRIzol reagent and quantified using a NanoDrop spectrophotometer (NanoDrop Technologies). A TaqMan reverse transcription reagents kit was used for the synthesis of cDNA from total RNA (Life Technologies). qPCR amplifications were performed in triplicate using 1 μl of cDNA with the iTaq Universal SYBR Green Supermix (Bio-Rad). The housekeeping gene GAPDH was used as a normalizer.
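Normalization of qPCR data to a housekeeping gene such as GAPDH is commonly done with the 2^-ΔΔCt relative-quantification method. A minimal sketch, with all Ct values invented for illustration only (not this study's data):

```python
# Hypothetical sketch of relative quantification with a housekeeping
# normalizer (GAPDH), using the standard 2^-ddCt method.
# All Ct values below are invented for illustration.
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression of the target gene, treated vs. control."""
    ddct = (ct_target_treated - ct_ref_treated) - (ct_target_control - ct_ref_control)
    return 2 ** (-ddct)

# e.g. a target gene vs GAPDH, treated vs control (assumed Ct values)
fc = fold_change(22.0, 18.0, 24.5, 18.2)
print(round(fc, 2))  # → 4.92-fold
```

A lower ΔCt in the treated sample (target amplifies earlier relative to GAPDH) yields a fold change above 1, i.e. upregulation.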
qPCR analysis of the HO1 gene was performed on an ABI Prism 7500 sequence detection system using the primers forward: 5′-CGGGCCAGCAACAAAGTG-3′ and reverse: 5′-CCAGAAAGCTGAGTGTAAGGACC-3′. Determination of HO1 and Nrf2 in STZ-Induced Diabetic Mice. Seven-week-old C57BL/6 male mice were purchased from Envigo. Diabetes was induced in the mice by injecting a single dose of streptozotocin (STZ; 165 mg/kg, i.p.), and blood glucose levels were measured with a glucometer (True Metrix). Mice with blood glucose levels >400 mg/dl were selected and randomly divided into diabetic and diabetic + fidarestat groups. Fidarestat (10 mg/kg/day, i.p.) was administered to the diabetic mice, and the animals were euthanized on day 3. Statistical Analysis. Data are presented as mean ± SD. The p values were determined using the unpaired Student's t-test (GraphPad Prism software), and a p value of <0.05 was considered statistically significant. AR Inhibition Prevents the HG-Induced Decrease in Thp1 Cell Viability. The effect of AR inhibition on HG-induced changes in Thp1 cell viability was examined by measuring live and dead cell counts as well as MTT absorbance. The data shown in Figure 1(a) indicate that HG treatment decreased the number of live Thp1 cells and increased the number of dead cells, indicating that HG decreases Thp1 cell viability. However, pretreatment with the AR inhibitor prevented the HG-induced decrease in viability. Similar results were observed when cell viability was measured by MTT assay (Figure 1(b)). The data shown in Figure 1(c) also indicate that AR activity was significantly increased in HG-treated Thp1 cells and that fidarestat prevented this increase. These results suggest that AR inhibition prevents the HG-induced decrease in Thp1 cell viability. AR Inhibitor Increases the Expression of Nrf2.
To examine how pretreatment with the AR inhibitor prevents the HG-induced decrease in Thp1 cell viability, we examined its effect on the expression of Nrf2. Treatment of Thp1 cells with fidarestat alone or HG alone induced Nrf2 expression in a time-dependent manner. Further, preincubation with fidarestat followed by incubation with HG significantly augmented the HG-induced increase in Nrf2 expression (Figure 2(a)). Similarly, treatment of Thp1 cells with HG decreased the expression of Keap1, a negative regulator of Nrf2, and preincubation with fidarestat followed by HG likewise decreased Keap1 protein expression (Figure 2(a)). We next examined the effect of the AR inhibitor on Nrf2 DNA-binding activity in Thp1 cells. Nrf2 transcriptional activity increased in fidarestat-treated Thp1 cells in a time-dependent manner compared with control cells (Figure 2(b)). Further, fidarestat augmented the HG-induced Nrf2 transcriptional activity in Thp1 cells. These results suggest that preincubation with the AR inhibitor prepares cells against oxidative insult by inducing Nrf2 expression. AR Inhibition Increases Antioxidative Protein Expression in Thp1 Cells. We next examined the effect of the AR inhibitor on the expression of various Nrf2-dependent antioxidative proteins. Results shown in Figure 3(a) indicate that fidarestat alone or HG alone increased the levels of antioxidant proteins such as HO1 and NQO1 in Thp1 cells. Further, pretreatment with fidarestat followed by HG synergistically increased HO1 and NQO1 protein expression (Figure 3(a)). Similarly, the AR inhibitor also increased HO1 levels in Thp1 cell lysates (Figure 3(b)). Furthermore, mRNA expression of HO1 increased significantly in cells treated with HG in the presence of fidarestat compared with HG- or fidarestat-treated cells (Figure 3(c)).
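The group comparisons in these results (reported as mean ± SD and tested with an unpaired Student's t-test at a p < 0.05 threshold) can be sketched in pure Python. The group values below are hypothetical, and the paper used GraphPad Prism for the actual analysis:

```python
# Pure-Python sketch of the unpaired (two-sample, pooled-variance)
# Student's t-test used for the reported comparisons. Group values are
# hypothetical; the study itself used GraphPad Prism.
from statistics import mean, variance

def unpaired_t(a, b):
    na, nb = len(a), len(b)
    # pooled variance assumes equal group variances (classic Student's t)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

control = [1.00, 0.95, 1.05, 0.98, 1.02]  # assumed normalized values
hg      = [0.62, 0.70, 0.66, 0.59, 0.68]

t = unpaired_t(control, hg)
# for df = 8, the two-tailed critical value at alpha = 0.05 is ~2.306
print(round(t, 2), abs(t) > 2.306)  # → 13.32 True
```

A |t| exceeding the critical value for the given degrees of freedom corresponds to p < 0.05, i.e. a significant difference between the groups.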
Furthermore, the AR inhibitor also significantly increased the HG-induced SOD and catalase activities in Thp1 cells (Figures 3(d) and 3(e)). Thus, our results indicate that pretreatment of Thp1 cells with fidarestat enhances the antioxidant status of the cells as a defense against hyperglycemic stress. Effect of AR Inhibitor on HG-Induced Cell Viability in Nrf2-Knockdown Thp1 Cells. To examine the effect of Nrf2 on AR-regulated cell growth, we determined Thp1 cell viability in Nrf2-siRNA knockdown cells in the absence and presence of fidarestat. Incubation of Nrf2-knockdown cells with HG (25 mM) significantly increased Thp1 cell death compared with control cells (Figure 4). Further, fidarestat prevented HG-induced Thp1 cell death in control-siRNA-transfected cells but not in Nrf2-siRNA-transfected cells, suggesting that fidarestat preserves Thp1 cell viability by increasing Nrf2 expression. AR Inhibitor Increases the Expression of Nrf2 and HO1 in Mouse Diabetic Heart and Kidney Tissues. We next examined the effect of fidarestat on the expression of the antioxidant proteins HO1 and Nrf2 in STZ-induced diabetic mouse heart and kidney tissues. Similar to the data obtained in Thp1 cells, the AR inhibitor fidarestat also augmented the STZ-induced increase in the expression of Nrf2, HO1, and NQO1 in heart and kidney homogenates of mice (Figure 5). AR Inhibitor Regulates HG-Induced Phosphorylation of AMPK-α1 in Thp1 Cells. Since AMPK-α1 activation has been shown to activate Nrf2 signals, we next investigated the effect of fidarestat on HG-induced AMPK-α1 activation in Thp1 cells. Our results shown in Figure 6(a) indicate that fidarestat increased the HG-induced phosphorylation of AMPK-α1 in Thp1 cells. Further, to investigate the effect of Nrf2 on AMPK-α1 activation, Nrf2-siRNA-transfected Thp1 cells were stimulated with HG ± fidarestat and AMPK-α1 activation was examined. The results shown in Figure 6 indicate an HG-induced increase in phosphorylation in the absence of fidarestat; however, pretreatment of the Nrf2-knockdown cells with fidarestat did not produce any significant differences in the phosphorylation of AMPK-α1. These results suggest that by regulating Nrf2-mediated AMPK-α1 activation, the AR inhibitor can modulate hyperglycemic stress in Thp1 cells.
Figure 2 legend: Cells were pretreated with fidarestat overnight followed by incubation with HG for 30, 60, 120, and 240 minutes. Equal amounts of nuclear and cytosolic proteins were subjected to Western blot analysis for the expression of Nrf2 and Keap1, respectively. Histone H3 and GAPDH served as loading controls for the nuclear and cytosolic protein extracts, respectively. A representative blot from three independent analyses is shown (a). The Nrf2 transcription factor assay using nuclear protein of treated Thp1 cells was carried out using an ELISA kit (b). Data represent mean ± SD (n = 5). *p < 0.05 compared with control; #p < 0.05 compared with the HG-treated group.
Figure 3 legend: Fold changes were determined after normalizing to the loading control GAPDH. A representative blot from three independent analyses is shown (a). HO1 levels in the cell lysates were determined by ELISA (b). The mRNA levels of the HO1 gene in Thp1 cells were determined by RT-PCR as described in Section 2 (c). SOD and catalase activities were analyzed in Thp1 cell lysates using specific kits as per the manufacturer's instructions (d and e). Data represent mean ± SD (n = 5). *p < 0.01 versus control; #p < 0.05 compared with the HG-treated group or the fidarestat-alone-treated group.
Figure 5 legend: AR inhibitor induces the expression of Nrf2 and its dependent antioxidant enzymes in tissues of STZ-induced diabetic mice. STZ-induced diabetic mice were treated without or with fidarestat as described in Section 2. Equal amounts of protein from the heart and kidney homogenates were subjected to Western blot analysis using specific antibodies against Nrf2, HO1, and NQO1. Fold changes were determined after normalizing to the loading control GAPDH. A representative blot from three independent analyses is shown. *p < 0.05 compared with control; #p < 0.05 compared with the HG-treated group.
Discussion We have shown previously that pretreatment with AR inhibitors prevents cytokine-, chemokine-, HG-, and LPS-induced inflammatory responses mediated by NF-κB in various cellular studies [12][13][14][15]. Further, AR inhibitors prevent NF-κB-mediated proinflammatory pathways in in vitro and in vivo models of hyperglycemia [12,16]. However, it is not clear how pretreatment with AR inhibitors prepares cells against oxidative stress and activates Nrf2-mediated anti-inflammatory pathways. In this study, we have demonstrated that fidarestat induces Nrf2-mediated antioxidative and anti-inflammatory pathways in Thp1 monocytic cells. Further, fidarestat also augmented the HG-induced expression of Nrf2 and its downstream targets. These results suggest that preincubation with AR inhibitors prepares the cells to defend against the pathological effects of hyperglycemia. The Nrf2 transcription factor regulates the expression of a number of cytoprotective antioxidative genes, including SOD, catalase, GSTs, AR, HO1, NQO1, and others [9,10]. Several studies indicate that antioxidants upregulate the Nrf2 pathway as a defense mechanism against various oxidative insults, including hyperglycemia [17,18]. Further, antioxidants such as flavonoids, triterpenoids, quinols, and tBHQ increase the activation of Nrf2 and protect against diabetes-induced nephropathy [19][20][21][22][23]. In addition, Nrf2-null mice are susceptible to STZ-induced kidney injury [24]. Another study indicates that sulforaphane prevents metabolic dysfunction in hyperglycemia by increasing the expression of the Nrf2 pathway in human endothelial cells [25,26].
Similarly, curcumin has been shown to decrease insulin resistance, improve pancreatic cell function, and reduce hyperglycemia-induced inflammatory responses and complications by activating the Nrf2 pathway [27][28][29]. These studies suggest the significance of Nrf2 activation in diabetes complications. Consistent with these studies, our current data also suggest that treatment of Thp1 cells with fidarestat augmented the HG-induced Nrf2 activity, indicating that fidarestat increases the antioxidative balance of the cells and thereby regulates HG-induced changes in cell viability. Furthermore, we evaluated the effect of fidarestat on HG-induced Thp1 cell viability in Nrf2-ablated cells. Our results demonstrate that the AR inhibitor prevented HG-induced cell death in control cells but not in Nrf2-ablated cells, suggesting that Nrf2-mediated antioxidative pathways are required for the actions of the AR inhibitor. Increased expression of Nrf2 leads to increased expression of enzymes linked to antioxidative (NQO1, GSTs, catalase, SOD) and anti-inflammatory (HO1, AR) functions that counteract oxidative insults [10,30]. HO1 is an anti-inflammatory protein, and its overexpression has been shown to prevent various inflammatory complications including diabetes [31]. Specifically, HO1 has been shown to protect against HG-induced retinal endothelial cell damage and to prevent vascular inflammatory responses in hyperglycemia [32][33][34]. In this study, our results indicate that fidarestat increases the expression of HO1 and augments HG-induced HO1 in Thp1 cells as well as in kidney and heart tissues of STZ-induced diabetic mice, suggesting that the anti-inflammatory activities of AR inhibition may act through activation of HO1.
In addition, our studies also suggest that the AR inhibitor increases the expression of SOD and catalase in Thp1 cells and of HO1 and NQO1 proteins in heart and kidney tissues of STZ-induced diabetic mice, indicating that AR inhibition prevents hyperglycemia-induced oxidative stress by upregulating various antioxidative enzymes. Since activation of AMPK has been shown to be involved in the regulation of the Nrf2 pathway [35], we also examined the effect of the AR inhibitor fidarestat on HG-induced changes in the phosphorylation of AMPK-α1 in Thp1 cells. Our results suggest that pretreatment with fidarestat stimulates the phosphorylation of AMPK-α1 in HG-treated Thp1 cells. However, fidarestat pretreatment had no effect on AMPK-α1 activation in Nrf2-ablated cells. In conclusion, we have shown that the AR inhibitor fidarestat prevents HG-induced Thp1 cell death by inducing Nrf2 expression and DNA-binding activity and the expression of HO1, NQO1, SOD, and catalase via activation of AMPK-α1. This suggests that AR inhibition prevents hyperglycemia-induced complications by upregulating the anti-inflammatory Nrf2 pathway in addition to downregulating the proinflammatory NF-κB pathway.
Figure 6 legend: siRNA-transfected cells were pretreated with fidarestat overnight followed by incubation with HG for 15, 30, 60, and 120 min. Equal amounts of cytosolic proteins were subjected to Western blot analysis using antibodies against total and phospho-AMPK-α1. Fold changes were determined after normalizing to total AMPK-α1. A representative blot from three independent analyses is shown. *p < 0.05 compared with control; #p < 0.05 compared with the HG-treated group.
Conflicts of Interest: The authors declare no conflict of interest.
v3-fos-license
2024-07-14T15:29:20.008Z
2024-07-10T00:00:00.000
271147389
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.3389/fcimb.2024.1407180", "pdf_hash": "ba97920ae8bae0615d01ef509d3d89ee235516ec", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1259", "s2fieldsofstudy": [ "Medicine", "Biology", "Environmental Science" ], "sha1": "f90403c6a5bcad2e0dd0b7d097b4f15b96c0243a", "year": 2024 }
pes2o/s2orc
Mapping knowledge landscapes and research frontiers of gastrointestinal microbiota and bone metabolism: a text-mining study Introduction Extensive research efforts have been dedicated to elucidating the intricate pathways by which gastrointestinal microbiota and their metabolites influence the processes of bone formation. Nonetheless, a notable gap exists in the literature concerning a bibliometric analysis of research trends at the nexus of gastrointestinal microbiota and bone metabolism. Methods To address this scholarly void, the present study employs a suite of bibliometric tools, including online platforms, CiteSpace and VOSviewer, to scrutinize the pertinent literature in the realm of gastrointestinal microbiota and bone metabolism. Results and discussion Examination of the temporal distribution of publications spanning from 2000 to 2023 reveals a discernible upward trajectory in research output, characterized by an average annual growth rate of 19.2%. Notably, China and the United States emerge as the primary contributors. Predominant among contributing institutions are Emory University, Harvard University, and the University of California. Pacifici R from Emory University contributed the most research, with 15 publications. Among academic journals, Nutrients emerges as the foremost publisher, followed closely by Frontiers in Microbiology and PLOS One, with PLOS One attaining the highest average citation count at 32.48. Analysis of highly cited papers underscores a burgeoning interest in the therapeutic potential of probiotics or probiotic blends in modulating bone metabolism by augmenting host immune responses. Notably, significant research attention has coalesced around therapeutic interventions with probiotics, particularly Lactobacillus reuteri, in osteoporosis, as well as the role of gastrointestinal microbiota in the etiology and progression of osteoarthritis.
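As a back-of-envelope check on figures like the 19.2% average annual growth rate reported above, compound annual growth can be computed from start and end publication counts. The counts used below are hypothetical, not the study's data:

```python
# Hypothetical sketch: compound annual growth rate of publication counts,
# (n_end / n_start)**(1/years) - 1. Counts are invented for illustration.
def annual_growth_rate(n_start, n_end, years):
    return (n_end / n_start) ** (1 / years) - 1

# e.g. ~8 papers in 2000 growing to ~450 in 2023 (assumed counts)
rate = annual_growth_rate(8, 450, 23)
print(f"{rate:.1%}")  # → 19.1%, close to the reported 19.2%
```

A sustained ~19% annual rate implies the field's yearly output roughly doubles every four years, which is consistent with the "discernible upward trajectory" described in the abstract.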
Keyword analysis reveals prevalent terms including gut microbiota, osteoporosis, bone density, probiotics, inflammation, SCFAs, metabolism, osteoarthritis, calcium absorption, obesity, double-blind, prebiotics, mechanisms, postmenopausal women, supplementation, risk factors, oxidative stress, and immune system. Future research endeavors warrant a nuanced exploration of topics such as inflammation, obesity, SCFAs, postmenopausal osteoporosis, skeletal muscle, oxidative stress, double-blind trials, and pathogenic mechanisms. In summary, this study presents a comprehensive bibliometric analysis of global research on the interplay between gastrointestinal microbiota and bone metabolism, offering valuable insights for scholars, particularly nascent researchers, embarking on analogous investigations within this domain. Introduction The human gastrointestinal tract harbors trillions of microorganisms from nearly 1000 different species, forming a complex ecosystem known as the gastrointestinal microbiota (Schmidt et al., 2018). This microbial consortium constitutes the largest ecosystem within the human body and plays crucial roles in various aspects of human physiology, including gastrointestinal development, metabolic processes, nutrition, inflammatory responses, and immune system maturation (Blanton et al., 2016; Gentile and Weir, 2018; Ruff et al., 2020; Fan and Pedersen, 2021). While the impacts of the gastrointestinal microbiota in these areas have been extensively verified, scientific exploration into its additional effects on various systems and the detailed mechanisms involved is actively ongoing.
Bone metabolism encompasses the processes of bone formation and resorption, with osteoblasts and osteoclasts as central players. Osteoclasts are responsible for bone resorption, while osteoblasts are involved in bone formation. Dynamic equilibrium between these two cell types maintains skeletal homeostasis. In recent years, increasing evidence has indicated widespread involvement of the gut microbiota in signaling pathways related to bone metabolism, closely associated with the occurrence and progression of various bone metabolism disorders (Sánchez Romero et al., 2021; Seely et al., 2021; Basak et al., 2024; Zhang et al., 2024). Disruption of the gut microbiota has been shown to negatively impact bone health by impairing intestinal calcium absorption and modulating the balance of the osteoprotegerin (OPG)/receptor activator of nuclear factor-κB ligand (RANKL) pathway through regulation of multiple hormone levels, consequently reducing bone strength and quality (Han et al., 2023; Cai et al., 2024). Meanwhile, several probiotic strains, such as Lactobacillus and Bifidobacterium, have been demonstrated to exert significant anti-osteoporotic effects (de Sire et al., 2022; Li et al., 2024; Zhang et al., 2024). Although research on the therapeutic potential of gut microbiota in osteoporosis and osteoarthritis is still in its nascent stage, most scholars envision gut microbiota intervention as a promising strategy for the diagnosis and treatment of bone metabolic diseases in the future.
In light of these findings, the study of the relationship between gut microbiota and bone metabolism has garnered widespread attention (Sánchez Romero et al., 2021; Seely et al., 2021; de Sire et al., 2022; Han et al., 2023; Basak et al., 2024; Cai et al., 2024; Li et al., 2024; Zhang et al., 2024). Numerous studies have endeavored to elucidate the intricate mechanisms through which gut microbiota and their metabolites influence bone formation and remodeling, resulting in a plethora of published works. However, faced with this vast body of literature, researchers often expend considerable time and effort to keep abreast of the latest developments and research dynamics. While some reviews and meta-analyses offer summaries of key topics from specific perspectives, they may fall short in providing a comprehensive analysis of the overall research trends in the field (Wu et al., 2021a). Moreover, reviews may not furnish scholars, especially newcomers, with the latest information regarding nations, institutions, research clusters, and collaborative efforts. Consequently, owing to these limitations of reviews, bibliometric analysis has emerged as a complementary approach embraced by the biomedical community (Montazeri et al., 2023).
Bibliometric analysis involves the qualitative and quantitative study of all knowledge carriers, such as literature and patents, using statistical, information science, and mathematical methods. This methodology serves as a vital tool for identifying active research teams and potential collaborators, delineating research hotspots, depicting overall research trends, and pinpointing important frontiers for future exploration in a given research field (Wang et al., 2023; Babar et al., 2024). In recent years, owing to the explosive growth of biomedical literature and the continuous development of free bibliometric tools such as CiteSpace and VOSviewer, bibliometric studies have garnered increasing attention in the biomedical domain (Aguilar Ramírez et al., 2022; Montazeri et al., 2023; Cheng et al., 2024). Taking the gut microbiota as an example, previous bibliometric studies have analyzed the relationship between gut microbiota and tumors (Zyoud et al., 2022; Wu et al., 2023), inflammatory diseases (Liu et al., 2023; Zhang et al., 2023), immune system disorders (Ni et al., 2022; Zhang et al., 2023), and traumatic diseases (Du et al., 2023; Huang et al., 2023). In the field of bone metabolism, several bibliometric analyses have investigated advancements and hotspots in research areas such as osteoporosis (Wu et al., 2021a; Temel et al., 2022), osteoarthritis (Yang et al., 2022; Xiong et al., 2024), and rheumatoid arthritis (Zhong et al., 2021; Radu et al., 2022), mapping the overall knowledge structure and citation networks of these fields. However, to the best of our knowledge, no prior study has employed bibliometric methods to analyze research hotspots at the intersection of gastrointestinal microbiota and bone metabolism. To address this research gap, the present study employs various bibliometric tools to analyze relevant literature in this field. The primary objectives of this study are to: (1) analyze the overall trends in publications in the
field from 2000 to 2023; (2) identify major contributors, including countries, institutions, and funding agencies; and (3) analyze the development and evolution trends of this domain.

Data extraction

Following the aforementioned search strategy, the selected literature was downloaded with "Full Record and Cited References" and exported in text or tab-separated format. Microsoft Excel 2019 was employed to count bibliometric metrics such as annual publication counts, citation frequencies, countries/regions, institutions, authors, funding agencies, journals, keywords, and references. The "Citation Report" function in WoSCC was utilized to assess additional bibliometric indicators including total citations, average citations per item (ACI), and the H-index. Journal Impact Factors (IF) and quartile classifications (Q1, Q2, Q3, Q4) were sourced from the 2023 Journal Citation Reports (JCR, http://clarivate.com/products/web-of-science). The H-index is defined as the number of articles (n) that have received at least n citations. Within the same discipline, JCR categorizes all journals into four quartiles based on their IF, where Q1 represents the top 25%, Q2 the subsequent 25-50%, and so forth. This study also addressed certain inherent deficiencies in the WoSCC database, consolidating and categorizing information from various regions into their respective countries. For instance, publications from England, Northern Ireland, Scotland, and Wales were aggregated under the United Kingdom, while those from mainland China and Taiwan were categorized under China.
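The H-index defined above, together with the annual growth rate formula and Pearson correlation used in the Data analysis section below, can be sketched in a few lines of Python. The citation and publication counts here are hypothetical examples for illustration, not the study's data.

```python
from typing import Sequence

def h_index(citations: Sequence[int]) -> int:
    """Largest h such that at least h papers have h or more citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def annual_growth_rate(first_year_pubs: int, last_year_pubs: int, n_years: int) -> float:
    """Compound annual growth rate of publications, in percent."""
    return ((last_year_pubs / first_year_pubs) ** (1 / n_years) - 1) * 100

def pearson_r(x: Sequence[float], y: Sequence[float]) -> float:
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical record: four papers have at least four citations each.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note that the growth-rate exponent is 1/23 in the paper because the window 2000-2023 spans 23 year-over-year steps.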
Data analysis

Descriptive data analysis, chart plotting, and curve fitting were performed using Microsoft Excel 2019 and R (v4.1.0). Specifically, Microsoft Excel was employed to visualize trends in annual publication and citation counts, utilizing exponential, linear, logarithmic, or polynomial curve fitting and selecting the optimal model based on the coefficient of determination (R²). The annual growth rate of publications was calculated as: Annual Growth Rate = [(Number of publications in 2023 ÷ Number of publications in 2000)^(1/23) − 1] × 100 (Wu et al., 2023). Pearson correlation coefficient tests were conducted to evaluate the correlation between citation counts and publication counts, with P < 0.05 considered statistically significant. Bibliometric and visualization analyses were conducted using three tools: an online bibliometric platform (https://bibliometric.com/), CiteSpace (v6.2R6) (Synnestvedt et al., 2005), and VOSviewer (v1.6.20) (van Eck and Waltman, 2010). CiteSpace, developed by Chen et al., is a widely used Java-based open-source software for bibliometric analysis. In this study, the parameters for CiteSpace were configured as follows: (1) time slicing: each year from 2000 to 2023 was used as a time slice; (2) node types: keywords and references; (3) pruning options: Pathfinder, pruning the merged network; (4) top N per slice: set to 30 for the author node type and 50 for the reference node type. VOSviewer, co-developed by van Eck and Waltman, is another bibliometric software offering text mining capabilities to extract crucial parameters from extensive scientific publications. It provides three types of network maps: network visualization maps, overlay visualization maps, and density visualization maps. The parameters for VOSviewer were configured as follows: type of analysis (selected one at a time, such as country/region, institution, journal, or keywords), item
thresholds (based on specific conditions), and the VOSviewer thesaurus file (to merge different variants of keywords). Additionally, this study employed the online bibliometric platform to analyze national collaborations and trends in annual publications.

Trend analysis of publications

The quantity of publications across different periods directly reflects the developmental trends and transitions within a research domain (Wu et al., 2023). Among the final selection of 893 articles, comprising 689 original articles and 204 reviews, Figure 2 illustrates the annual distribution of publications in the field of gut microbiota and bone metabolism from 2000 to 2023. The overall research output demonstrates a pronounced upward trajectory, with an average annual growth rate of 19.2%, notably surpassing one hundred articles per year in the last four years. In terms of citation frequency, the cumulative total citations for the 893 papers amount to 24,658, with an average of 27.61 citations per article. A significant positive correlation exists between the number of publications and citation counts (r = 0.94, P < 0.001). These findings indicate a growing interest in the field of gut microbiota and bone metabolism in recent years. This result may be related to advancements in high-throughput sequencing technologies such as metagenomics and metabolomics (Lau et al., 2023; Hiltzik et al., 2024; Huang et al., 2024). Genomic sequencing of gut microbiota unveils their diversity, abundance, and functional potential, while metabolomics enables scholars to explore the composition and variations of metabolites within organisms, thereby elucidating the impact of microbiota on host metabolism and the host's regulation of microbial metabolism (Lau et al., 2023). Previous bibliometric analyses have demonstrated similar trends in research on gut microbiota in areas such as cancer (Zyoud et al., 2022; Wu et al., 2023), inflammatory diseases (Liu et al., 2023; Zhang et al., 2023), immune
system disorders (Ni et al., 2022; Zhang et al., 2023), and traumatic injuries (Du et al., 2023; Huang et al., 2023). It is conceivable that with ongoing breakthroughs in omics technologies, our understanding of the relationship and mechanisms between gut microbiota and bone metabolism will be further enhanced, leading to a continued increase in publications within this field in the foreseeable future.

Analysis of journals and research directions

For centuries, scientific publications have served as pivotal instruments for scholarly discourse across various domains. Publishing research findings in internationally peer-reviewed journals constitutes a crucial element in establishing effective scientific communication (Wu et al., 2021a). Analyzing the major journals within a specific research field can help researchers promptly identify the most suitable outlets and target audiences for their articles. Figure 3A summarizes the top 10 journals with the highest publication volume. Among these journals, Nutrients boasts the highest number of relevant articles, followed by Frontiers in Microbiology and PLOS One. Most of these journals are classified as Q1 or Q2, with Nutrients having the highest IF at 5.9. Regarding the comparison of H-index and ACI, Nutrients achieves the highest H-index of 17, while PLOS One attains the highest ACI of 32.48. Apart from publication volume, the influence of journals also hinges upon their citation frequency, which is a pivotal determinant of journal IF. In this study, co-citation analysis of journals was conducted using VOSviewer, as depicted in Figure 3B. Journals with at least 100 citations were included in the visualization analysis, encompassing a total of 99 journals. Notably, the top 3 co-cited journals are Journal of Bone and Mineral Research, PLOS One, and Nature. These results suggest that these journals have published a considerable volume of high-quality research, garnering substantial attention from scholars in the
field. Furthermore, the WoSCC database facilitates the classification of research directions for each article, as illustrated in Figure 3C. The top 3 research directions with the highest publication volume are Nutrition Dietetics, Microbiology, and Endocrinology Metabolism. Overall, these findings align with the thematic selection process undertaken. It is noteworthy that immunology has also emerged as one of the top 10 most scrutinized research directions. In recent years, a growing body of research has unveiled that gut microbiota and their metabolites not only modulate the secretion of endocrine hormones to influence bone remodeling mechanisms but also exert control over bone development by stimulating the immune system (Duque et al., 2011; Wagner and Johnson, 2012; Charles et al., 2015; Hao et al., 2019). For instance, Britton et al. (2014) discovered that L. reuteri could suppress the quantity of bone marrow CD4+ T lymphocytes, thereby directly inhibiting osteoclastogenesis. Dysbiosis of the gut microbiota could promote Th17 cell differentiation, leading to the secretion of inflammatory factors such as IL-1, IL-17A, and tumor necrosis factor-α (TNF-α), which in turn promotes RANKL generation, inducing monocytes to differentiate into osteoclasts and accelerating bone loss (Luo et al., 2011). Related findings were reported by Kim et al. (2017). [Figure 2. The annual publication trend in the research field of gut microbiota and bone metabolism.]
Additionally, as depicted in Figure 3D, employing CiteSpace for the overlay analysis of journals enables visualization of the distribution patterns and citation trajectories of knowledge across the disciplinary domains represented by citing and cited journals. The thickness of the connecting lines signifies the frequency and intensity of information flow between journals. The overlay map of journals in this study exhibits five main information flows. The uppermost yellow flow represents research in the fields of environmental science, toxicology, and nutrition; the middle flow signifies molecular biology and genetics; and the lower flow represents health, nursing, and medical science. These flows converge towards the fields of molecular biology and immunology. Similarly, two additional green flows represent research findings from molecular biology, genetics, health, nursing, and medical science converging towards the fields of medicine, internal medicine, and clinical medicine. The theoretical and technical foundations of research on gut microbiota and bone metabolism originate from these information sources, while the trajectory of information flow depicts the developmental process and evolutionary direction of this field. The convergence points of information flow herald future research frontiers and trends. The results of the overlay map in Figure 3D indicate that future research hotspots in the field of gut microbiota and bone metabolism will focus on molecular biology, immunology, medicine, internal medicine, and clinical medicine.
Analysis of national and institutional contributions

In this corpus of 893 articles, contributions from 67 countries/regions were identified. Notably, China and the United States emerged as the most prolific contributors, with 296 and 231 publications, respectively, collectively constituting 59% of the total articles. Evidently, both China and the United States stand as primary contributors in this field (Figure 4A). Previous studies have underscored the indispensable role of substantial financial support in the advancement of scientific research, highlighting the correlation between the research output of different countries and their respective Gross Domestic Product (GDP) (Wu et al., 2021b). Consistently, this investigation scrutinized the top 5 funding agencies supporting research in the field of gut microbiota and bone metabolism. The analysis reveals that the National Natural Science Foundation of China (NSFC), the National Institutes of Health (NIH), and the Department of Health and Human Services (HHS) of the United States are the leading sponsors of research endeavors in this domain (Figure 4E). This further underscores the correlation between research output from China and the United States and ample funding support. The H-index, defined as the number h of papers that have been cited h or more times, stands as a pivotal metric characterizing both the quality and quantity of research output (Hirsch, 2005). Consequently, this metric serves as a primary indicator for quantifying the productivity and impact of nations or institutions (Dasgupta and Taegtmeyer, 2023; Shah and Jawaid, 2023). According to the H-index, the United States leads with a score of 57, followed by China (34) and the United Kingdom (21). Nevertheless, it is worth noting that the H-index is intricately linked with temporal factors, with cumulative citation counts gradually increasing over time for a given study (Wu et al., 2021b). The annual publication trends of the leading countries are depicted in Figure 4B.

Analysis of highly cited publications

Highly cited literature analysis is a commonly utilized method in bibliometric research. While debate persists regarding whether citation counts entirely represent the impact of a paper, it is generally acknowledged that citation frequency serves as the most objective indicator of research influence within the academic community (Wu et al., 2021a). Figure 5 illustrates a citation analysis network diagram encompassing literature in the gut microbiota and bone metabolism field, with each node representing an article and node size proportional to citation frequency. Table 1 summarizes the top 10 most cited articles. These studies were published between 2005 and 2019, with 50% of them garnering over 300 citations. Notably, Yan et al. (2016) achieved the highest citation count of 398 for their study published in PNAS. Their research revealed that colonization of the gastrointestinal microbiota from conventionally raised SPF mice into germ-free adult mice significantly increased bone formation and bone mass in the latter. Furthermore, they observed a notable elevation in serum IGF-1 levels in germ-free mice following microbial colonization, while antibiotic treatment lowered IGF-1 levels and suppressed bone formation. Supplementation of SCFAs to antibiotic-treated mice restored IGF-1 levels and bone mass to levels comparable to those of untreated mice. Thus, the authors concluded that the gut microbiota could promote bone formation and growth through the induction of IGF-1. Ranked second among highly cited literature is the study by Li et al.
(2016) published in J Clin Invest. This research found that in conventional mouse models, steroid deficiency increased intestinal mucosal barrier permeability, elevated Th17 cells, and upregulated expression of bone resorption factors such as TNF-α, IL-17, and RANKL in bone marrow and small intestines, resulting in trabecular bone loss. Conversely, in germ-free mice, steroid deficiency failed to increase bone resorption factor production. Treatment of steroid-deficient mice with the probiotic Lactobacillus rhamnosus or the probiotic supplement VSL#3 significantly reduced intestinal permeability, suppressed bone marrow and intestinal inflammatory factor generation, and completely prevented bone loss. These experimental findings suggest that the gut microbiota serves as a central mediator of the trabecular bone loss induced by steroid deficiency and that reducing intestinal permeability with probiotics may serve as an effective therapeutic strategy for postmenopausal osteoporosis. The third-ranked article is a clinical study that discovered a significant increase in adolescent intestinal calcium absorption and enhanced bone mineralization with daily intake of prebiotic short-chain and long-chain fructo-oligosaccharides (Abrams et al., 2005). Ranking fourth in citation count is a study investigating whether Lactobacillus reuteri could mitigate bone loss in an ovariectomized (OVX) mouse model. Results indicated that Lactobacillus reuteri could reduce OVX-induced increases in bone marrow CD4+ T lymphocytes and suppress bone resorption; thus, Lactobacillus reuteri treatment may represent an effective approach to treating postmenopausal bone loss (Britton et al., 2014). The fifth-ranked study found increased bone mass and decreased osteoclast numbers in germ-free mice compared to conventionally raised mice, and colonization of germ-free mice with normal gut microbiota restored bone mass, primarily attributed to changes in the expression of inflammatory cytokines in mouse bone marrow
(Sjögren et al., 2012). While seemingly contradictory to the findings of Yan et al. (2016), both studies concur that the gut microbiota is a critical regulator of mouse bone mass. Ranked sixth, a study identified that the microbial metabolite butyrate stimulates bone formation through Treg cell-mediated Wnt10b expression (Tyagi et al., 2018). [Figure 5. Citation analysis of documents.] Another highly ranked study found that probiotic treatment could alter skeletal immune status. Specifically, treatment with probiotics such as Lactobacillus paracasei reduced expression of the inflammatory cytokines TNF-α and IL-1β in cortical bone of OVX mice, increased OPG expression, and promoted Treg cell differentiation, thus preventing cortical bone loss (Ohlsson et al., 2014). The ninth-ranked study is a review summarizing the mutual influence between the gut microbiota and skeletal muscle health (Grosicki et al., 2018). Lastly, a study found that Lactobacillus reuteri could increase bone formation in male mice by reducing intestinal TNF-α levels (McCabe et al., 2013). In conclusion, current research confirms the close relationship between the gut microbiota and bone/skeletal muscle metabolism, with probiotics and prebiotics regulating bone metabolism by improving host immune status. Moreover, microbiota-based therapies focusing on gut microbiota modulation have emerged as important avenues for treating bone metabolic diseases. In this therapeutic strategy, fecal microbiota transplantation and supplementation with probiotics or prebiotics are garnering significant attention and are extensively researched and discussed.

Analysis of references

Highly cited literature analysis captures only the total citation frequency of papers and fails to reflect the temporal dynamics of attention. The "burst detection" algorithm developed by Kleinberg (Kleinberg, 2003)
is a commonly used bibliometric method capable of capturing sharp increases in citation attention during specific periods. In this study, we employ the burst detection algorithm to extract citations in the field of gut microbiota and bone metabolism research from 2000 to 2023. Figure 6 illustrates the top 25 bursting citations. In this figure, the blue lines represent time intervals, while the red lines indicate citation burst periods. Among these citations, the most prominent burst value is associated with a study by Britton et al. (2014), which garnered widespread attention from 2015 until 2019, marking a continuous burst period of five years after its publication. As previously mentioned, this study primarily reveals that Lactobacillus reuteri could mitigate the bone marrow CD4+ T lymphocyte expansion induced by OVX, suppress osteoclastogenesis, and reduce bone loss. It is noteworthy that although the burst periods of most citations have concluded, several citations continue to experience ongoing bursts, indicating sustained interest in these research topics in recent years. For instance, Tyagi et al. (2018) confirmed that the microbial metabolite butyrate stimulates bone formation through Treg cell-mediated Wnt10b expression. Additionally, Zaiss et al. (2019) provided a comprehensive review elucidating the crucial regulatory role of SCFAs, as metabolites produced by the gut microbiota, on the musculoskeletal system and its mechanisms. Nilsson et al.
(2018) conducted a randomized, placebo-controlled, double-blind clinical trial to investigate whether Lactobacillus reuteri reduces bone loss in elderly women. The results revealed that daily oral administration of 10^10 colony-forming units of Lactobacillus reuteri for 12 months significantly decreased bone loss in women aged 75 to 80 compared to the placebo, nearly halving the loss of bone mineral density. Another study revealed an association between decreased bone mineral density in individuals with osteopenia and osteoporosis and alterations in the gut microbiota; these changes may serve as biomarkers or therapeutic targets for reduced bone mineral density in high-risk individuals. Summarizing the recent burst of citations, it is evident that high-quality clinical studies of probiotic therapy, especially Lactobacillus reuteri, as an intervention for osteoporosis have garnered significant attention. However, the differential effects of various types of prebiotics or probiotics, and whether gender, age, and etiology modify their efficacy in the treatment of osteoporosis, remain unclear. Future large-scale, multicenter clinical studies are needed to validate the actual effectiveness of adequate intake of prebiotics or probiotics on human bone health.
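The burst detection idea cited above (Kleinberg, 2003) can be illustrated with a deliberately simplified two-state sketch: a baseline state and a burst state whose emission rate is s times higher, with a cost gamma for entering the burst state, solved by Viterbi-style cost minimization over yearly counts. This is a toy reconstruction for intuition only, not CiteSpace's implementation, and the counts are hypothetical.

```python
import math

def kleinberg_bursts(counts, s=2.0, gamma=1.0):
    """Toy two-state burst detector over yearly counts.

    State 0 emits at the baseline rate (mean of counts), state 1 at
    s * baseline; entering state 1 costs gamma. Returns 0/1 labels
    per year (1 = burst), found by Viterbi cost minimization with a
    Poisson emission cost (the c! term cancels between states).
    """
    base = sum(counts) / len(counts)
    rates = (base, s * base)

    def emit(state, c):
        lam = rates[state]
        return lam - c * math.log(lam)  # negative Poisson log-likelihood

    best = [emit(0, counts[0]), emit(1, counts[0]) + gamma]
    back = []
    for c in counts[1:]:
        arg0 = 0 if best[0] <= best[1] else 1          # leaving a burst is free
        prev0 = min(best[0], best[1])
        arg1 = 0 if best[0] + gamma < best[1] else 1   # entering one costs gamma
        prev1 = min(best[0] + gamma, best[1])
        back.append((arg0, arg1))
        best = [prev0 + emit(0, c), prev1 + emit(1, c)]
    state = 0 if best[0] <= best[1] else 1
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return path[::-1]

# Hypothetical yearly citation counts with a burst in years 3-4:
print(kleinberg_bursts([1, 1, 1, 8, 9, 1, 1]))  # [0, 0, 0, 1, 1, 0, 0]
```

The red burst intervals in Figure 6 correspond to runs of state-1 labels; raising gamma makes bursts harder to enter and therefore rarer.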
In addition to burst citation analysis, this study also employs CiteSpace to conduct cluster analysis of citations and arrange the obtained cluster labels in chronological order. The timeline of citation cluster analysis in the field of gut microbiota and bone metabolism is depicted in Figure 7. In this figure, cluster labels are named using the log-likelihood ratio (LLR) algorithm, with the Silhouette (homogeneity) and Modularity parameters serving as important indicators for assessing the soundness of the clustering. Both parameters range from 0 to 1: a Modularity value greater than 0.3 indicates significant modular structure, and a Silhouette value greater than 0.7 indicates highly credible clusters. For the clusters obtained in this study, Silhouette = 0.9 and Modularity = 0.81, indicating a high degree of homogeneity and modularity, suitable for further analysis. Among the cluster labels, lower numbers indicate clusters containing more citations, with #0 generally the largest. From Figure 7, it can be observed that the clusters labeled "double-blind clinical trials" and "postmenopausal osteoporosis" occupy the top two positions in terms of the number of citations included, indicating that clinical trials and postmenopausal osteoporosis are the most researched directions in this field. Furthermore, the timeline representation intuitively demonstrates the dynamic trends of citation cluster labels over different periods. Based on the average appearance time of the different labels, it can be inferred that cluster #1 postmenopausal osteoporosis, #5 probiotic mixtures, and #7 osteoarthritis progression are the current hot topics of interest in this field.
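The LLR labeling mentioned above ranks candidate terms by how strongly their frequency inside a cluster departs from their frequency in the rest of the corpus. A minimal sketch of Dunning's log-likelihood ratio (G²), assuming simple token counts rather than CiteSpace's internal representation (and, for brevity, that 0 < k/n < 1 in both samples):

```python
import math

def llr_g2(k1, n1, k2, n2):
    """Dunning's G^2 for a term seen k1 times in n1 tokens of a cluster
    and k2 times in n2 tokens of the remaining corpus. Larger values
    mark terms more distinctive of the cluster (candidate labels)."""
    def ll(k, n, p):
        # log-likelihood of k occurrences in n tokens at rate p
        return k * math.log(p) + (n - k) * math.log(1 - p)
    p = (k1 + k2) / (n1 + n2)  # pooled rate under the null hypothesis
    return 2 * (ll(k1, n1, k1 / n1) + ll(k2, n2, k2 / n2)
                - ll(k1, n1, p) - ll(k2, n2, p))
```

A term with the same relative frequency inside and outside the cluster scores zero; the more over-represented it is inside, the higher the score, so the top-scoring term becomes the cluster label.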
In recent years, beyond the established focus on osteoporosis, the role of the gut microbiota in the pathogenesis and progression of osteoarthritis has garnered increasing attention (Amin et al., 2023). Substantial research indicates that dysbiosis of the gut microbiota can influence the advancement of osteoarthritis through multiple mechanisms, including the regulation of trace elements (such as iron, zinc, and magnesium), participation in immune responses, and disruption of metabolic processes (Collins et al., 2015; Ulici et al., 2018; Celis and Relman, 2020). For instance, Guss et al. (2019) demonstrated that in a mouse model of osteoarthritis, load-induced cartilage and subchondral bone lesions significantly altered the abundance of Bacteroides and Firmicutes. Similarly, Collins et al. (2015) found notable changes in the abundance of Lactobacillus in the gut microbiota of obese rats induced by a high-fat diet; these changes were significantly associated with levels of inflammatory markers in joint fluid and blood, as well as Mankin scores, in the osteoarthritis rat model. Furthermore, studies have shown that dysbiosis of the gut microbiota can lead to an increase in lipopolysaccharide (LPS)-producing pathobionts, resulting in elevated LPS levels in the bloodstream. This excess LPS can hyperactivate the immune system, triggering severe inflammatory responses (Arbeeva et al., 2022; Loeser et al., 2022). Previous research has indicated that LPS is directly involved in the osteoarthritis disease process and is significantly correlated with the severity of osteophyte formation, suggesting that LPS could serve as a biomarker of osteoarthritis severity (Huang et al., 2016). It can be hypothesized that gut microbiota dysbiosis exacerbates osteoarthritis progression by increasing gut permeability and compromising the intestinal barrier, thereby promoting systemic low-grade inflammation through elevated LPS production.
In addition to inflammatory mechanisms, both clinical and experimental studies suggest that the gut microbiota is a critical factor in the pathogenesis of metabolic syndrome. Dysbiosis can lead to metabolic disturbances and hormonal imbalances, resulting in conditions such as insulin resistance, hypertension, and central obesity (Croci et al., 2021). Research calculating the combined risk ratios for metabolic syndrome predicting osteoarthritis, and vice versa, reveals a bidirectional relationship between these conditions (Liu et al., 2020). Consequently, gut microbiota dysbiosis may contribute to osteoarthritis development via metabolic disorder pathways. Given that current research indicates that probiotics, prebiotics, and microbiota transplantation can ameliorate gut microbiota dysbiosis, the gut microbiota presents a promising target for future interventions in osteoarthritis. For instance, Schott et al. (2018) observed that a significant reduction in Bifidobacterium levels in obese mice led to downstream systemic inflammation signals, with macrophages accumulating in the joint synovium and thereby accelerating osteoarthritis progression; restoring the gut microbiota with oligofructose supplementation decreased systemic inflammation and alleviated arthritis symptoms. Similarly, Sim et al.
(2018) demonstrated that intervention with Butyricicoccus in a rat model of knee osteoarthritis significantly reduced serum inflammatory markers such as IL-6 and COX-2 while increasing glycosaminoglycan and IFN-γ levels, effectively reducing fibrotic tissue formation in the knee joint. Other studies have shown that treatment with Bifidobacterium in guinea pig models of osteoarthritis significantly reduced cartilage structural damage and type II collagen degradation (Henrotin et al., 2021). It is important to note that most of these studies are limited to animal models; the efficacy of interventions such as probiotics, prebiotics, and microbiota transplantation in osteoarthritis patients requires extensive clinical trials for confirmation.

Analysis of keywords

High-frequency keywords serve as significant indicators of current hotspots in a research area. From the corpus of 893 articles, this study extracted a total of 325 keywords appearing more than 5 times. Figure 8A illustrates the co-occurrence map of these high-frequency keywords. [Figure 7. Timeline network map of reference co-citation analysis.]
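A keyword co-occurrence map of the kind shown in Figure 8A is built from pairwise counts: two keywords are linked whenever they appear in the same article's keyword list, and the link weight is the number of such articles. A minimal counting sketch, with hypothetical keyword lists:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(records):
    """Count how often each keyword pair appears in the same article.

    records: iterable of per-article keyword lists. Pairs are stored
    in sorted order so (a, b) and (b, a) are counted together.
    """
    pairs = Counter()
    for keywords in records:
        for a, b in combinations(sorted(set(keywords)), 2):
            pairs[(a, b)] += 1
    return pairs

articles = [
    ["gut microbiota", "osteoporosis", "probiotics"],
    ["gut microbiota", "osteoporosis"],
    ["probiotics", "SCFAs"],
]
print(cooccurrence(articles)[("gut microbiota", "osteoporosis")])  # 2
```

Tools like VOSviewer compute essentially such pair weights at scale and then lay out the keywords so that frequently co-occurring terms sit close together.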
SCFAs and bone metabolism

Taking SCFAs as an exemplar: SCFAs are a class of metabolic byproducts generated during carbohydrate fermentation by the gut microbiota, primarily comprising acetate, propionate, and butyrate. These microbial metabolites, such as the acetate produced by Akkermansia muciniphila in the colonic mucosal layer and the butyrate fermented by bacteria like Eubacterium hallii and Faecalibacterium prausnitzii (Derrien et al., 2004; Shetty et al., 2018; Verstraeten et al., 2024), have been shown in previous studies to impact bone metabolism through various pathways. For instance, SCFAs can lower intestinal pH, regulate the expression of calcium transport proteins in intestinal epithelial cells, and facilitate calcium absorption (Whisner et al., 2016). Acetate or butyrate salts produced by the gut microbiota can also enhance the expression of calcium-binding protein D9k, significantly promoting intracellular calcium utilization (Nath et al., 2018). Furthermore, SCFAs participate in regulating the secretion of relevant hormones (IGF-1 and glucagon-like peptide-1) to modulate osteoblast and osteoclast differentiation. As previously mentioned, Yan et al. (2016) found that after conventional mice were treated with broad-spectrum antibiotics or vancomycin, the concentration of SCFAs in the cecum significantly decreased, accompanied by a notable reduction in IGF-1 levels; however, supplementing the mice with SCFAs restored circulating IGF-1 to normal, indicating a significant role of SCFAs in increasing serum IGF-1. Psichas et al.
(2015) discovered that injecting acetate into the colon of wild-type mice significantly increased glucagon-like peptide-1 levels, with SCFAs acting mainly through the G protein-coupled receptors FFAR2 and FFAR3 on enteroendocrine L cells to promote glucagon-like peptide-1 secretion. It is worth noting that an increasing body of research has found that SCFAs possess immunomodulatory functions, influencing bone metabolism through T and B cell immune mechanisms (Keirns et al., 2020; Massy and Drueke, 2020; Mohamed et al., 2024). On one hand, SCFAs can significantly promote the differentiation of intestinal Th17 cells and the secretion of various inflammatory factors, inducing osteoclastogenesis. On the other hand, SCFAs can induce CD4+ T cells to differentiate into Treg cells, activating the Wnt signaling pathway in osteoblasts to promote bone formation. In summary, SCFAs regulate bone metabolism through multiple pathways, including modulation of intestinal calcium absorption, endocrine pathways, and immune regulatory mechanisms. However, some controversy remains regarding the mechanisms by which SCFAs promote osteoblast differentiation: some studies suggest that SCFAs promote glucagon-like peptide-1 secretion, while others suggest they inhibit it (Psichas et al., 2015; Larraufie et al., 2018). Further in vivo and in vitro experiments are needed to elucidate the regulatory mechanisms of SCFAs on bone metabolism.
Anti-osteoporosis drugs and gastrointestinal microbiota

The effects of anti-osteoporosis drugs on the gut microbiota are complex and multifaceted. In recent years, an increasing number of studies have focused on the interactions between bone metabolism and the gut microbiota, as well as the role anti-osteoporosis drugs play in this process. Previous studies have found that certain drugs, such as cinnamic acid (Hong et al., 2022), chondroitin sulfate calcium complex (Shen et al., 2021), and Yigu decoction (Zhang et al., 2023), can increase the number of beneficial bacteria, thereby improving gut health and promoting bone health by reducing the production of inflammatory mediators. Taking the chondroitin sulfate calcium complex, a drug commonly used to treat bone and joint diseases, as an example, Shen et al. (2021) found that intervention with it could alleviate osteoporosis caused by estrogen deficiency. This effect may be associated with the treatment's ability to increase the abundance of Acidobacteria, Chloroflexi, and Gemmatimonadetes while decreasing the abundance of Bacteroidetes, Actinobacteria, and the B/F ratio at the phylum level, along with changes in specific gut microbial communities at the genus level. However, most studies remain at the animal-testing stage; more large-scale, high-quality clinical cohort studies are needed to verify the impact of these drugs on human gut microbiota and bone health. Additionally, researchers need to explore the optimal dosage, methods of administration, and potential side effects of these drugs to ensure their safety and efficacy in clinical applications.
Limitations

The limitations of this study are mainly as follows. First, as with other bibliometric studies, data for this analysis were retrieved only from the WoSCC, rather than from databases such as Scopus, PubMed, or Google Scholar; previous studies have shown that WoSCC is the most authoritative and commonly used database, with high reliability for bibliometric studies (Jin et al., 2023; Wan et al., 2023). Moreover, recently published high-quality papers may not be identified because they have not yet received enough citations, which may lead to discrepancies between the insights from bibliometric analysis and ongoing real-world advances.

Conclusion

Overall, the gut microbiota and its metabolites are closely related to bone metabolism, influencing both osteoblastic and osteoclastic differentiation through various pathways. Further exploration of the mechanisms by which the gut microbiota affects bone metabolism could provide more therapeutic targets for a variety of bone metabolic diseases. In this study, we conducted, for the first time, a comprehensive bibliometric analysis of the overall knowledge framework and research status in the field of gut microbiota and bone metabolism. Among the 893 articles finally selected, the trend analysis of annual publication volume clearly indicates that this field is attracting increasing attention. Regarding major contributors, China and the United States undoubtedly dominate, mainly in publication quantity and H-index. The top 3 institutions in terms of publication volume are Emory University, Harvard University, and the University of California. The journal Nutrients has the highest number of relevant publications, followed by Frontiers in Microbiology and PLOS One.
Moreover, the Journal of Bone and Mineral Research, PLOS One, and Nature are the most influential journals in this field. As for research directions, Nutrition Dietetics, Microbiology, and Endocrinology Metabolism are the top three in terms of publication volume. Analysis of highly cited literature and of the citation results indicates that the current research focus is the use of probiotics or probiotic mixtures to regulate bone metabolism by improving host immune status. Probiotic therapy, especially Lactobacillus reuteri intervention for osteoporosis treatment, and the role of gut microbiota in the onset and progression of osteoarthritis have received considerable attention. Keyword analysis reveals that current hot topics mainly include osteoporosis, bone density, probiotics, inflammation, SCFAs, metabolism, osteoarthritis, calcium absorption, obesity, double-blind, prebiotics, mechanisms, postmenopausal women, supplements, risk factors, oxidative stress, and the immune system. Future research themes worthy of further attention include inflammation, obesity, SCFAs, postmenopausal osteoporosis, skeletal muscle, oxidative stress, double-blind trials, and etiological mechanisms. In conclusion, this study provides a comprehensive bibliometric analysis of gut microbiota and bone metabolism research from a global perspective, offering valuable reference data for scholars, particularly young researchers, engaged in similar studies in this field.

also found that filamentous bacteria isolated from the mouse gut could promote Th17 cell differentiation. Additionally, the dynamic balance between Th17 and Treg cells serves as a crucial target for the gut microbiota. Studies have shown that Lactobacillus acidophilus could enhance the secretion of anti-inflammatory factors such as IL-10 and TGF-β by adjusting the Th17/Treg ratio, thereby reducing osteoclast proliferation and bone resorption (Dar et al., 2018b). Dar et al.
(2018a) demonstrated that Bacillus clausii could increase Treg cell-promoted bone formation in ovariectomized mouse models. Moreover, Czernik et al. (2021) proved that Lactobacillus rhamnosus could promote bone anabolic metabolism by regulating Treg cell-mediated Wnt10b generation. In summary, immune cells serve as a link between gut microbiota and bone metabolism, modulating bone formation or resorption through the regulation of T or B cell functions. With the gradual emergence of the concept of osteoimmunology in recent years, the gut microbiota serves as a vital link in exploring the connection between bone and the immune system, offering additional targets for the

FIGURE 3 (A) The foremost 10 journals in the field of gut microbiota and bone metabolism publications. (B) A network analysis of co-cited journals. The size of nodes reflects the cumulative citation counts of the respective journals. (C) The top 10 research directions with the highest publication output. (D) Dual-map overlay of journals.
The United States dominated the early stages of publications in this field, but in recent years China has surged ahead, even surpassing the United States in annual publication output by more than half. It can be inferred that China's comparatively lower H-index relative to the United States may primarily stem from the recent publication of many studies that have yet to accumulate sufficient citations. Distinguished research institutions and scholars also play a pivotal role in generating high-quality output. Analysis of the top 10 institutions by publication volume reveals six American institutions, three Chinese institutions, and one Swiss institution. Notably, the top 3 institutions in terms of publication volume, Emory University, Harvard University, and the University of California, are all situated in the United States (Figure 4D), underscoring a potential key factor contributing to the United States' sustained high-quality output. Furthermore, Figure 4C illustrates an analysis of international collaboration, where thicker connecting lines between two countries indicate closer research partnerships; close collaboration is evident between China and the United States. In addition, we summarized the authors with the highest number of publications in this field. Pacifici R from Emory University contributed the most research with 15 publications (ACI = 80.13), followed by Mccabe LR from Michigan State University (ACI = 102.36), Ohlsson C from the University of Gothenburg (ACI = 77.09), and Parameswaran N from Michigan State University (ACI = 92.91), each with 11 publications.

FIGURE 4 (A) Distribution of the top 10 countries in terms of publication output. (B) Annual publication trends of the top 10 countries in publication output. (C) Analysis of collaboration between countries. (D) Distribution of the top 10 institutions in publication output. (E) Top 5 funding agencies ranked by funding support.
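The H-index comparison above can be made concrete: an author's (or country's) H-index is the largest h such that h of their papers each have at least h citations. A minimal computation sketch, using hypothetical citation counts rather than data from this study:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still supports an h-index of `rank`
        else:
            break
    return h

# Hypothetical citation counts for one author's papers
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```

This also illustrates why a country with many very recent papers can trail in H-index despite leading in output: new papers enter the list with low citation counts and do not yet raise h.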
The seventh highly cited paper, by Lahiri et al. (2019) and published in Sci Transl Med, observed decreased muscle mass and signs of muscle atrophy in germ-free mice compared with conventionally raised mice. Reduced IGF-1 expression in muscle tissue and significant downregulation of genes associated with skeletal muscle growth and mitochondrial function were noted. Treatment of germ-free mice with SCFAs partially reversed the skeletal muscle damage by preventing muscle atrophy and increasing muscle strength. The study underscores the crucial role of the gut microbiota in regulating mouse skeletal muscle mass and function, proposing the concept of the gut microbiota-skeletal muscle axis. The eighth-ranked study found … distal tibial total bone mineral density observed in the placebo group. Schepper et al. (2020) demonstrated that the gut microbiota and intestinal barrier function are involved in glucocorticoid-induced osteoporosis, primarily through mechanisms such as Wnt10b inhibition and osteoblast apoptosis, identifying the gut as a novel therapeutic target for preventing glucocorticoid-induced osteoporosis. Another study by Schepper et al. (2019) confirmed that antibiotic treatment results in dysbiosis of the intestinal microbiota and increased intestinal permeability, significantly reducing trabecular bone volume in the femur. Lactobacillus reuteri, but not Lactobacillus rhamnosus, prevented antibiotic-induced dysbiosis of the gut microbiota and loss of femoral trabecular bone. This study further emphasizes the role of dysbiosis-induced changes in intestinal permeability in regulating skeletal health and identifies Lactobacillus reuteri as a novel therapy for preventing antibiotic-induced microbial dysbiosis. A randomized, double-blind, placebo-controlled, multicenter trial published by Jansson et al.
(2019) in the Lancet Rheumatology in 2019 continues to receive attention. The study compared the effects of combination therapy with three probiotics (Lactobacillus paracasei DSM 13434, Lactobacillus plantarum DSM 15312, and Lactobacillus plantarum DSM 15313) on lumbar spine bone mineral density in postmenopausal women. The results showed that the combined treatment with the three lactobacilli significantly reduced lumbar spine bone loss in postmenopausal women after 12 months. Das et al. (2019) analyzed the fecal microbial profiles of 181 individuals with osteopenia, 60 with osteoporosis, and 60 with normal bone mineral density.

FIGURE 7 (A) Visualization density map of keyword co-occurrence analysis. (B) Analysis of keyword bursts.

TABLE 1 Top 10 most cited articles.
Morpho-functional effects of different universal dental adhesives on human gingival fibroblasts: an in vitro study

The aim was to analyze the effects of four universal adhesives (Optibond Solo Plus, OB; Universal Bond, UB; Prime&Bond Active, PBA; FuturaBond M+, FB) on human gingival fibroblasts in terms of cytotoxicity, morphology and function. After in vitro exposure for up to 48 h, fibroblast viability was determined by the MTT assay, morphology by phase-contrast microscopy and migration by the scratch wound assay. Expression levels of the IL1β, IL6, IL8, IL10, TNFα and VEGF genes were assessed by RT-PCR and their protein production by Western blot analysis. Apoptosis and the cell cycle were analyzed by flow cytometry. OB and UB induced early morphological changes in fibroblasts (3 h) with extensive cell death at 24 h/48 h. Gene expression of collagen type I and fibronectin increased fivefold compared with controls, elastin disappeared and elastase increased threefold, indicating that gingival tissue tended to become fibrotic. Only UB and OB increased gene expression of inflammatory markers: IL1β at 3 and 48 h (up to about three times) and IL6 and IL8 at 3 h (up to almost four times), corresponding to the increase of the activated form of NF-kB. All adhesives affected the functionality of fibroblasts, with a time- and concentration-dependent cytotoxic effect; among all the adhesives, OB and UB caused the greatest cell damage. This in-depth analysis of the effects of universal adhesives and their possible functional consequences provides important information for the clinician when choosing the most suitable adhesive system.

Introduction

Universal adhesives were developed to solve clinical and practical problems in conservative dentistry, allowing shorter protocols [1] or fewer procedures and less manipulation during acid conditioning [2].
Also known as "multimode" adhesives, they may be used in self-etch mode on dentin or in etch-and-rinse mode on enamel, according to the type of caries and the clinician's choice [3,4]. On the other hand, their major disadvantages are a shallower enamel etching depth, greater discoloration of enamel margins and a shorter adhesion duration than with a separate orthophosphoric acid step [5,6]. The differing results in terms of cytotoxicity could be related to differences in chemical composition. All adhesives contain monomers that may be hydrophilic, such as 2-hydroxyethyl methacrylate (HEMA) and 4-methacryloyloxyethyl trimellitate anhydride (4-META), or hydrophobic, together with solvents like acetone or ethanol [28]. Universal adhesives have additional copolymers, silane [28] and carboxylate or phosphate monomers, such as methacryloyloxydecyl dihydrogen phosphate (MDP). MDP interacts with calcium and the resulting precipitate occludes the tubules, helping to increase chemical adhesion [2]. Major changes in adhesion strategy over time might be another factor in their cytotoxicity. The total-etch technique of the 5th generation was replaced by self-etching systems (6th and 7th generations) in which acidic monomers partially demineralize the smear layer and underlying dentine [8]. Our preceding study showed that even though all adhesives in a series of universals were associated with a certain level of general toxicity, the behavior profiles were not the same over time and in varying dilutions [29]. Consequently, we postulated that factors other than cytotoxicity come into play in determining adhesive biocompatibility, and turned our attention to the effects of adhesives on fibroblasts, as they are the predominant cell type in periodontal connective tissue [10,30]. Fibroblasts are hypothesized to play a major role in modulating the inflammatory process, as they activate, proliferate and secrete cytokines to counteract cell damage due to external stimuli, thus inducing inflammation [31].
Furthermore, external stimuli can alter normal fibroblast secretion of extracellular matrix (ECM) proteins. The present investigation focused on early fibroblast responses to dental materials by analyzing the effects of four dental adhesives on morphology and function in terms of viability, apoptosis, the balance of pro- and anti-inflammatory markers, and ECM molecule secretion and degradation. The first null hypothesis was that there are no significant differences in cytotoxicity among the four selected adhesives. The second null hypothesis was that contact between adhesives and fibroblasts does not cause any morpho-functional alteration of gingival fibroblasts.

Test materials

Starting from a previous study on the cytotoxicity of dental adhesives on oral cell populations [29], four adhesives usable with self-etching and total-etching techniques were examined: Optibond Solo Plus (OB; Kerr Corporation, Orange, United States), Universal Bond (UB; Tokuyama Corporation, Tokyo, Japan), Prime&Bond Active (PBA; Dentsply De Trey, Konstanz, Germany) and Futurabond M+ (FB; Voco GmbH, Germany). Components, classification and manufacturers' information are listed in Table 1.

Cell culture

Human gingival stroma fibroblasts BSCL138 (IZSLER, Brescia, Italy) were grown as monolayer cultures in sterile polystyrene T-75 flasks (Thermo Fisher Scientific, Waltham, MA, USA) in a humidified incubator at 37 °C with 5% CO2 and twice-weekly changes of medium. The cultures were monitored under a phase-contrast Leitz inverted microscope. The culture medium, Eagle's Minimum Essential Medium (MEM, Thermo Fisher Scientific, Waltham, MA, USA), was supplemented with 10% fetal bovine serum (FBS, Thermo Fisher Scientific, Waltham, MA, USA), penicillin (100 U/ml), streptomycin (100 mg/ml) and 25 μg/ml amphotericin B as an anti-fungal agent (Thermo Fisher Scientific, Waltham, MA, USA).
Upon 80% confluence (logarithmic growth phase), cells were detached with a mixture of 0.25% trypsin and 0.02% ethylenediaminetetraacetic acid (EDTA). Cells were counted in a Countess Automated Cell Counter (Thermo Fisher Scientific, Waltham, MA, USA) after 1:1 dilution in Trypan Blue dye (10 μl of cells and 10 μl of Trypan Blue), and then plated as described below. All tests were performed between the seventh and ninth passage.

Adhesive extract preparation

Dental adhesives (10 μl) were dropped centrally on the upper side of sterile glass discs (12 mm diameter × 0.15 mm depth, ExactaOptech Labcenter SpA, Modena, Italy), the solvent was evaporated with an air spray free of water and oil in accordance with the manufacturers' instructions, and the samples were photocured (Bluephase® G2, Ivoclar Vivadent AG, Schaan, Liechtenstein). Light intensity was set to 300 mW/cm2 for OB and PBA and 500 mW/cm2 for FB; the distance between the bonding agents and the light-curing lamp tip was under 2 mm. Dark custom-made spacers served to maintain an established distance between the light-curing tip and the sample surface and to eliminate external irradiation sources. The polymerization times for the adhesive materials were in accordance with the manufacturers' instructions: 20 s for OB and 10 s for PBA and FB. The adhesives on glass discs were topped with 1 ml of MEM (extract) containing 10% FBS, the anti-fungal agent (amphotericin B) and antibiotics (penicillin and streptomycin) for 24 h at 37 °C and 5% CO2. Extracts were filtered through 0.22 μm cellulose acetate filters (Merck Millipore, Germany) and serially diluted before use [29]. Collected extracts (culture medium + components leached from the adhesives) were added to the cells undiluted or in serial dilutions. After treatment, 10 μl of MTT solution (5 mg/ml) was added to each well. Plates were covered and incubated for 4 h at 37 °C.
MTT-derived formazan crystals were dissolved by adding 100 µl/well of dimethyl sulphoxide (DMSO, Sigma Chemical Co., St. Louis, MO) under gentle shaking. According to ISO 10993-5 [33], fewer viable cells result in decreased mitochondrial enzyme activity (succinic dehydrogenase, SDH), which directly correlates with the amount of blue-violet formazan produced by tetrazolium salt reduction. Absorbance values in the control group and the percentage of viable cells were compared. Cell viability was calculated from the optical density (OD) according to the following formula: % cell viability = (OD of test group/OD of control group) × 100.

Cell morphology

To determine the effects of the extracts on cell morphology, human gingival fibroblasts were seeded at a density of 1 × 10^5 cells/ml in 1.9 cm2 wells (Thermo Fisher Scientific, Waltham, MA, USA) and maintained in MEM supplemented with 10% FBS, the anti-fungal agent (amphotericin B) and antibiotics (penicillin and streptomycin) until sub-confluence. The culture medium was then discarded and replaced with 1 ml of undiluted extracts. Control groups were treated with fresh culture medium. Cell cultures were incubated for a further 1, 3 and 48 h at 37 °C in 5% CO2 before observation under a phase-contrast microscope (Nikon Eclipse MS100, Nikon Corporation, Tokyo, Japan).

Scratch assay

To investigate fibroblast migration, cells were plated on 6-well flat-bottom microtiter plates (Thermo Fisher Scientific, Waltham, MA, USA) and grown in 2 ml of growth medium. Once about 90% confluence was reached, the medium was removed and a straight scratch was created along the monolayer in the centre of the well using a sterile P-200 pipette tip, as described elsewhere [34]. Cellular debris was gently removed with Dulbecco's phosphate-buffered saline (PBS) and cultures were exposed to undiluted extracts. Images of wound closure were obtained at 0, 18, 24 and 48 h using a conventional phase-contrast microscope (Olympus, Tokyo, Japan).
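The viability formula given in the MTT section maps directly onto plate-reader output. The sketch below averages replicate wells before taking the ratio; the OD values are hypothetical, not data from this study:

```python
def mean(values):
    """Arithmetic mean of replicate well readings."""
    return sum(values) / len(values)

def percent_viability(od_test, od_control):
    """% cell viability = (OD of test group / OD of control group) x 100."""
    return od_test / od_control * 100.0

# Hypothetical blank-corrected optical densities from replicate wells
test_wells = [0.42, 0.40, 0.44]
control_wells = [0.80, 0.78, 0.82]
print(round(percent_viability(mean(test_wells), mean(control_wells)), 1))  # -> 52.5
```

Averaging replicates first, then taking a single ratio, matches the way the paper reports one viability percentage per extract and timepoint.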
Photographs were taken at 200× magnification to profile cell migration and morphology.

RNA isolation and RT-PCR analysis

Human gingival fibroblasts were seeded (1 × 10^5 cells/ml) in 6-well flat-bottom microtiter plates (Thermo Fisher Scientific, Waltham, MA, USA). After reaching confluence, cells were treated with undiluted adhesive extracts or fresh medium (control groups) for 1, 3 and 48 h to assess the gene expression of inflammatory markers and ECM proteins, and for 24 h to analyze apoptosis and cell cycle genes. Total RNA was isolated as described elsewhere [35]. Briefly, RNA from control and treated fibroblasts was isolated using a total RNA purification kit (Thermo Fisher Scientific, Waltham, MA, USA) and quantified by reading the optical density at 260 nm on a BioPhotometer (Eppendorf, Milano, Italy). Then, 1 μg of total RNA was subjected to reverse transcription (RT) in a final volume of 20 μl using ABM reagents (Richmond, Canada). Real-time PCR was performed using 2 μl of cDNA from the RT reaction. The primer sequences for each gene are listed in Table 2. Primers were designed with PerlPrimer software using NCBI Entrez-Gene reference sequences as templates and synthesized by Thermo Fisher Scientific. Real-time PCR was carried out in an Mx3000P cycler (Stratagene, Amsterdam, Netherlands) using FAM for detection and ROX as a reference dye. One-step PCR was performed in 25 μl of Brilliant SYBR® Green QPCR Master Mix (Stratagene, Amsterdam, Netherlands) according to the manufacturer's instructions. At each annealing step, product formation was monitored with the fluorescent double-stranded DNA-binding dye SYBR® Green. The relative expression level of the housekeeping gene glyceraldehyde-3-phosphate dehydrogenase (GAPDH) was used to normalize marker gene expression in each sample. Immediately after PCR, a melting curve was run by raising the incubation temperature from 55 to 95 °C to confirm amplification specificity.
Gene expression was determined using the threshold cycle (Ct), and relative expression levels were calculated via the 2^-ΔΔCt method. All values were computed with the MxPro QPCR software (Stratagene, Amsterdam, Netherlands).

Protein extraction and western blot analysis

Human gingival fibroblasts were seeded (1 × 10^5 cells/ml) in 6-well flat-bottom microtiter plates (Thermo Fisher Scientific, Waltham, MA, USA) and, after reaching confluence, treated with undiluted adhesive extracts or fresh medium (control group) for 1, 3 and 24 h. After treatment, fibroblasts were washed twice with ice-cold PBS and detached with trypsin/EDTA solution as described above. They were then covered with MEM, centrifuged at 1200g for 5 min at 4 °C and washed twice with PBS. Total proteins were extracted by lysing the cells with radioimmunoprecipitation assay (RIPA) lysis buffer (HiMedia Laboratories, Einhausen, Germany) supplemented with phosphatase inhibitor cocktails and 1X EDTA. Lysates were kept on ice for 30 min, vortexed every 10 min and stored at −20 °C overnight. Finally, samples were centrifuged at 12,000g for 10 min at 4 °C and the supernatants (total protein) were collected [36]. Protein concentrations in the cytosolic extracts were quantified using the Bio-Rad assay; 30 μg per lane was loaded on 12% SDS-PAGE and transferred onto nitrocellulose membranes. To reduce nonspecific binding, membranes were blocked with 5% (w/v) non-fat dried milk in T-TBS (TBS containing 0.1% Tween-20) for 1 h at room temperature. After blocking, membranes were incubated overnight at 4 °C under gentle agitation with each primary antibody: rabbit anti-P-NF-kB-p65(Ser536) polyclonal antibody (1:250) in 5% milk, rabbit anti-NF-kB-p65 polyclonal antibody (1:1000) in BSA, or rabbit anti-cathepsin B polyclonal antibody (1:750) in BSA. All antibodies were purchased from Elabscience (Houston, Texas, USA).
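The 2^-ΔΔCt relative quantification used for the RT-PCR data works by normalizing each Ct to the housekeeping gene (GAPDH here) and then to the untreated control. A minimal sketch with hypothetical Ct values, not values from this study:

```python
def relative_expression(ct_target_treated, ct_gapdh_treated,
                        ct_target_control, ct_gapdh_control):
    """Fold change via 2^-ddCt: dCt = Ct(target) - Ct(GAPDH),
    ddCt = dCt(treated) - dCt(control)."""
    d_ct_treated = ct_target_treated - ct_gapdh_treated
    d_ct_control = ct_target_control - ct_gapdh_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** -dd_ct

# Hypothetical Ct values: the target crosses threshold 2 cycles earlier after treatment
print(relative_expression(24.0, 18.0, 26.0, 18.0))  # -> 4.0 (fourfold upregulation)
```

Because PCR product roughly doubles each cycle, every cycle of earlier threshold crossing corresponds to a twofold higher starting amount, which is what the exponent encodes.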
Membranes were stripped and re-probed with a mouse anti-β-actin mAb (1:5000) as a loading control. After washing twice in T-TBS, membranes were incubated with horseradish peroxidase (HRP)-labeled anti-rabbit or anti-mouse (both 1:5000) secondary antibodies for 1 h at room temperature. Immunoreactive proteins were detected using the enhanced chemiluminescence system (ECL, Amersham Pharmacia, Milan, Italy) and quantified with an image analyzer (ChemiDoc, Bio-Rad, California, USA).

Apoptosis and cell cycle analysis

Apoptosis and the cell cycle were assessed by flow cytometry as previously described [37]. Briefly, control and treated fibroblasts were harvested after 24 h, re-suspended in 0.5 ml of hypotonic propidium iodide (PI) solution (50 µg/ml propidium iodide in 0.1% sodium citrate plus 0.1% Triton X-100) and analyzed by flow cytometry using a Coulter Epics XL-MCL flow cytometer (Beckman Coulter). Data were analyzed using FlowJo software (TreeStar).

Statistical analysis

Figures report the mean ± SD (standard deviation) of three independent experiments performed in quintuplicate for each dental adhesive. One-way analysis of variance (ANOVA) was performed using GraphPad Prism 5.01 software (Prism, CA, USA). p values < 0.05 were considered statistically significant.

Cytotoxicity assay (MTT)

All undiluted adhesive extracts were associated with time-dependent SDH activity, which increased over the short term (1, 3, 6 h) but was reduced over the long term (from 24 to 72 h). The drop was most marked for the FB and UB extracts at 72 h (37% and 49%, respectively). As the extracts were diluted, short-term stimulation and long-term inhibition of cell viability became less marked (p = ns) (Fig. 1).

Cell morphology

Under a phase-contrast microscope, controls always (1, 3, 48 h) displayed a continuous monolayer of viable fusiform-shaped cells. After 1 h, the elongated morphology was unchanged in all four extracts.
After 3 h, wide intercellular spaces (a low-density cellular sheet) were observed and cells showed a prevalently spindle-shaped or irregular morphology, with less defined borders and many threadlike extensions. After 48 h, cell numbers dropped and numerous detached, round cells were detected, indicating that the adhesive extracts had a toxic effect. All these changes were more marked in cells treated with OB and UB extracts (Fig. 2).

Scratch assay

With all dental extracts, unlike controls, the scratch was still not closed at 48 h. After 18 h only FB and PBA were associated with scratch closing, but the narrowing was less than in controls and the scratch was still visible at 24 h. At all timepoints, cells in all extract samples were multiform and longer than controls, with gradually enlarging intercellular spaces (Fig. 3).

Gene expression of inflammatory markers

To test the impact of the adhesive extracts on inflammatory processes, we analyzed the expression levels of the IL-1β, IL-6, IL-8, IL-10, TNFα and VEGF genes by RT-PCR. No adhesive changed IL-10 and TNFα expression at any timepoint (data not shown). All adhesives upregulated VEGF expression at 3 h (Fig. 4).

Gene expression of ECM proteins

PBA upregulated collagen I and MMP1 collagenase expression significantly (p < 0.05 and p < 0.001, respectively) after 1 h. OB significantly increased only collagen I transcription after 1 h (p < 0.001). All adhesive extracts significantly increased collagen I and MMP1 collagenase expression from 3 h onwards, with significance more marked at 48 h (p < 0.001). FB and PBA stimulated fibronectin transcription at 1 h (p < 0.001), returning to baseline at 3 h and persisting there at 48 h. UB and OB upregulated fibronectin significantly at 3 h, reaching more marked significance at 48 h (p < 0.001). No adhesive changed MMP2 (gelatinase) mRNA expression significantly at any timepoint.
Western blot analysis

Western blot analysis investigated the effects of the four adhesive extracts on the expression of cathepsin B and of the transcription factor NF-kB-p65 and its activated form (p-NF-kB-p65), both involved in the inflammatory pathway, after 1, 3 and 24 h. One hour after treatment, UB and OB inhibited p-NF-kB expression compared with controls; PBA and FB inhibited expression more weakly (Fig. 6a, b). After 3 h, UB and OB continued to inhibit p-NF-kB expression; only PBA significantly increased expression, twofold compared with control (Fig. 6b). After 24 h, all adhesives significantly upregulated p-NF-kB expression. Compared with controls, upregulation ranged from twofold with UB and OB to threefold for PBA and sevenfold for FB (Fig. 6b). After 1 h, all adhesives significantly increased NF-kB expression (Fig. 6c). After 3 h, the increase was more marked, but expression dropped below control levels at 24 h (Fig. 6c). After 1 h, all adhesives significantly increased cathepsin B expression compared with controls; the rise was twofold with UB and OB (Fig. 6d). At 3 h, UB and OB significantly decreased cathepsin B expression compared with 1-h levels and controls, while PBA and FB increased expression (Fig. 6d). At 24 h, all adhesives except PBA significantly decreased cathepsin B expression (Fig. 6d).

Fig. 1 Effects of dental adhesive extracts (diluted and undiluted) on human gingival fibroblasts using the MTT assay. The results for each extract are expressed as the percentage of SDH activity compared with the control (100%). The values represent the mean ± SD of three independent experiments performed in quintuplicate for each sample. Differences vs. control: *p < 0.05; **p < 0.001

Apoptosis and cell cycle

Compared with controls, FB, OB and UB significantly inhibited fibroblasts in the G0/G1 phase for 24 h, impairing progression through the G1-S phase transition. Real-time PCR showed that p16 expression was unchanged, while p21 expression was upregulated at 24 h (Fig. 7).
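The fold changes reported for p-NF-kB and cathepsin B are conventionally derived from densitometry: each band's intensity is normalized to its β-actin loading control, then expressed relative to the untreated control lane. A sketch with hypothetical band intensities, not values from this study:

```python
def fold_vs_control(band, actin, band_ctrl, actin_ctrl):
    """Densitometric fold change: normalize each lane to beta-actin,
    then divide the treated ratio by the control ratio."""
    return (band / actin) / (band_ctrl / actin_ctrl)

# Hypothetical image-analyzer intensities for one target at 24 h
print(round(fold_vs_control(1400.0, 1000.0, 200.0, 1000.0), 2))  # -> 7.0
```

Normalizing to β-actin first compensates for unequal protein loading between lanes, so the final ratio reflects regulation of the target rather than pipetting differences.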
UB and OB slightly upregulated fibroblast apoptosis at 24 h, together with p53 and Bcl-2 expression. PBA did not change apoptosis; it maintained Bcl-2 at control levels and upregulated p53 at 24 h. FB did not modify apoptosis, although it increased p53 and Bcl-2 significantly (Fig. 8).

Discussion

The present study was designed to assess the effects of four universal dental adhesives on the adaptive cell responses of human gingival fibroblasts. Contact with the adhesives altered the fibroblasts' morpho-functional status and migration capacity. Increased ECM protein transcription could indicate tissue evolution towards fibrosis. Interestingly, short-term contact was associated with enzyme stimulation and pro-inflammatory cytokine expression, followed by a time- and dose-dependent cytotoxic effect. The present results showed that UB and OB had the greatest impact on the morpho-functional profile of human gingival fibroblasts. In clinical dentistry, knowledge of the potential cytotoxicity of these adhesives is a fundamental requirement for their use [1]. The different responses of oral cells to PBA and FB, together with biological knowledge of adhesive behavior, can help clinicians in material selection and in planning clinical procedures and timing. As far as we are aware, this is the first report of such a full range of observations, some of which differ greatly from other studies. In an attempt to dissipate the confusion surrounding such divergent results, the present in-depth analysis established the effects of adhesives on gingival fibroblasts, as they constitute the cell type most exposed to dental materials [20]. Others instead used murine cells, making comparisons difficult [15,25]. Negative [23] or positive [19,21,25] control systems could also confound comparisons, as the present study used only untreated fibroblasts as controls. Assessing the impact of adhesives at different timepoints could also generate conflicting results.
Although observation times generally covered 24 h [15, 23], a few studies, like the present one, extended timepoints to 48 h [8, 22, 25]. In the present study, short- and long-term assessment of cytotoxicity and five dilutions for the MTT test provided better information for both clinicians and researchers, showing that cytotoxicity appeared to depend on adhesive concentration and exposure time. In fact, cell viability first increased and was then gradually reduced in a time-dependent manner. The present study opted to use MTT, which assesses cytotoxicity through mitochondrial activity, because it is the assay most frequently used in accordance with the ISO 10993-5 recommendations. Other studies assessed different parameters, e.g., the sulforhodamine B (SRB) assay [20], the lactate dehydrogenase (LDH) assay [14], the fluorescent V-FITC/PI live-dead staining assay [22] or Hoechst 33342 [38]. The MTT assay showed that all adhesives stimulated SDH metabolic activity at 1 and 3 h, which weakened with longer exposure, thus highlighting damage due to inhibition of normal cellular functions. Morphological analysis and wound healing showed that all adhesives induced cell death at 48 h, as demonstrated by the numerous round cells in suspension and the few remaining adherent cells, and prevented cell migration to the wound and wound closure. Further studies are needed to extrapolate these results to the clinical setting. Although MTT detects cytotoxicity, it is non-informative on the mechanism of damage or cell death. In focusing on apoptosis, a common form of cell death, the present study unexpectedly found that the high cell death rate was not due to apoptosis despite SDH activity, suggesting impaired enzyme function and possibly necrosis. The present observations showed that p53 and Bcl-2, two key apoptosis-related genes, were upregulated compared with untreated cells [39]. In particular, p53 seems to have an important role in the presence of dental monomers like TEGDMA [40]. Interestingly, cell cycle analysis revealed that all the dental adhesives arrested fibroblasts in the G0/G1 phase and inhibited their transition to G1-S, correlating with upregulation of p21, an inhibitor of cell cycle progression at the G1 and S phases. Different studies have focused on possible mechanisms activated by adhesive monomers, for example involving ROS production [41, 42].

Fig. 2 Time-dependent effects of adhesive extracts on fibroblast morphology. Phase-contrast micrographs of untreated human gingival fibroblasts (control) or fibroblasts exposed to undiluted extracts for 1, 3 and 48 h. Arrows indicate spindle cells with threadlike extensions (Bar = 10 μm)

The major finding in the present study was that all dental adhesives modified inflammatory patterns. Contact with dental materials can, in fact, cause an inflammatory response [43, 44] with over-production of inflammatory markers such as IL6, IL-8 [45, 46], IL1β and IL-18 [47-49], all of which play major roles in gingivitis and periodontal destruction [50, 51]. Present observations showed that all adhesives were associated with increased IL6 and IL8 expression after short-term exposure, which dropped sharply at 48 h, and with increased IL1β at 48 h. We hypothesize this was due to their secretion into the extracellular compartment, which might be an interesting starting point for future studies. In investigating the underlying inflammatory pathways, our attention focused on NF-kB and cathepsin B, which probably play different roles in regulating the expression of inflammation mediators such as IL1β, IL6 and IL8 [52, 53]. We found that FB and PBA were linked with NF-kB-associated cytokine expression, while the same cannot be said for OB and UB. In these adhesives, where p-NF-kB regulation is lacking, a closer association with the cathepsin B pathway could be hypothesized.
This possible differential regulation of inflammatory cytokine expression will be the subject of future studies. Since inflammation is known to influence ECM organization [54], we monitored the effects of adhesive extracts on transcription of ECM elements. Specifically, increased fibroblast adhesion, as indicated by high fibronectin levels, and excess collagen type I, observed with all adhesive extracts, suggested promotion of fibrosis with consequent gingival tissue impairment [55, 56]. A compensatory mechanism to reduce collagen accumulation was detected in the greater transcription of MMP1 collagenase. Interestingly, fibronectin was reported to bind the TLR4 receptor, a member of the receptor family that regulates the NF-kB-dependent synthesis of cytokines [57]. Its link to TLR4 induced an inflammatory response in fibroblasts [58]. Thus, increased fibronectin transcription could account for NF-kB activation shortly after treatment with adhesive extracts. Likewise, the elastin/elastase trend may underlie decreased elastic plasticity, which together with the collagen and fibronectin profiles triggers an increase in tissue fibrosis.

Fig. 3 Effect of undiluted adhesive extracts on cell migration in the wound-healing migration assay. a Representative phase-contrast images of the wounds were taken at 0, 18, 24 and 48 h (200× magnification). b Quantification of the percentage of closed wound area calculated by tracing the border of the wound using ImageJ software. Data represent the mean ± SD of three independent experiments. Differences vs. control: *p < 0.05; **p < 0.001

Fig. 4 Effect of undiluted adhesive extracts on gene expression of IL1β, IL6, IL8 and VEGF evaluated by RT-PCR at 1, 3 and 48 h. The results for each extract are expressed as fold-change in GAPDH-normalized mRNA values. The values represent the mean ± SD of three independent experiments performed in triplicate for each sample. Differences vs. control: *p < 0.05; **p < 0.001
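The wound-closure quantification used in the migration assay (wound borders traced in ImageJ, closure expressed as a percentage of the initial wound area) reduces to simple area arithmetic; a sketch with invented pixel areas, not the study's measurements:

```python
# Percent wound closure from traced wound areas: the open wound area
# at time t is compared with the area at t = 0. Areas are in arbitrary
# pixel units and are invented for illustration.

def percent_closure(area_t0, area_t):
    return 100.0 * (area_t0 - area_t) / area_t0

# Hypothetical control-like series over the assay's timepoints (hours):
areas = {0: 50000, 18: 32000, 24: 21000, 48: 4000}
for hours, area in areas.items():
    print(f"{hours} h: {percent_closure(areas[0], area):.1f}% closed")
```

A treated well that fails to migrate would keep its wound area close to the t = 0 value, giving a closure percentage near zero, which is the pattern the extracts produced.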
Starting from the results of our study, the null hypotheses can be rejected.

Fig. 8 Effect of adhesive extracts on apoptosis (a) and related gene expression (b). Human gingival fibroblasts were treated with undiluted extracts for 24 h. Cells were collected, stained with PI and analyzed by flow cytometry for the percentage of apoptotic cells (a). Bcl-2 and p53 gene expression were evaluated by RT-PCR (b). The values represent the mean ± SD of three independent experiments performed in quintuplicate for each dental material. Differences vs. control: *p < 0.05; **p < 0.001
Tamoxifen resistance mechanisms in breast cancer treatment

Therapies targeting the estrogen receptor (ER) are widely used to treat ER+ breast cancer patients. Despite early detection and improved survival outcomes, tamoxifen resistance, either intrinsic or acquired, is a major obstacle to effective disease management. The current review summarizes the different molecular mechanisms of intrinsic and acquired tamoxifen resistance in breast cancer treatment. It not only provides a basis for understanding the nature of tamoxifen resistance but also suggests mechanisms for its control, leading to improved therapeutic interventions.

Introduction

Breast cancer (BC) is the most common type of cancer diagnosed in females. It is the second leading cause of mortality in women across the globe. The higher death rate corresponds to metastatic occurrence, with breast cells invading the primary tissue and then colonizing distant sites (Ferreira et al., 2020). The stimulation of the female reproductive hormones (including estrogen, progesterone, etc.), specifically during the period of breast development, is perhaps the reason for the increased susceptibility to breast cancer in women as compared to men (Brisken & O'Malley, 2010).

Epidemiology

Despite epidemiological and clinical advances in research, there is a continuous rise in the incidence of breast cancer (Qasim et al., 2020). A recent report has shown its impact to be 1 in 20 globally and 1 in 8 in high-income countries (Britt et al., 2020). A female's reproductive history, age, lifestyle patterns, and genetic and environmental factors all have a strong impact on the disease course (Youn & Han, 2020). Even with high incidence, early detection and effective chemotherapeutic strategies have helped reduce mortality and improve the quality of life of BC patients (Mubarik et al., 2020).
Sub-types

The immunohistochemistry (IHC) biomarkers such as estrogen receptor (ER), progesterone receptor (PR) and human epidermal growth factor receptor 2 (HER2, sometimes called HER2/neu), coupled with the conventional pathological variables including tumor grade, size, lymph node involvement, etc., are strictly considered for diagnosis, prognosis, and disease management in patients (Dai et al., 2015). Based on these receptor interactions and signaling, breast cancer is sub-classified. The binding of the estrogen hormone to its target protein, the estrogen receptor (ER), mediates receptor phosphorylation, brings conformational changes in the active site and triggers receptor dimerization (Shou et al., 2004). This in turn activates transcription by facilitating the binding of the receptor complex to the promoter regions of ER-associated genes encoding growth factors and other signaling molecules (Björnström & Sjöberg, 2005). Former literature reviews have found that the transcriptional activity of ER is modulated by protein-protein interactions with either co-activators or co-repressors (Shou et al., 2004). In the case of breast cancer, the altered receptor conformation promotes transcriptional activation through the active recruitment of co-activators, which is a stimulus for cell proliferation (Chang, 2012). This makes the ER a potent target in antiestrogen cancer therapy (Yaşar et al., 2016).

First Line Endocrine Therapy for Breast Cancer

Endocrine therapy has been shown to be effective for breast cancer for decades. Herein, tamoxifen plays a fundamental role in reducing disease incidence.

Metabolism of Tamoxifen

Tamoxifen is a pro-drug which holds little affinity for its target ER protein (García-Becerra et al., 2013). It requires metabolic activation by CYP gene products, which convert inactive TAM into its active metabolites, including 4-hydroxy-tamoxifen (4-OH-TMX) and N-desmethyl-4-hydroxytamoxifen (endoxifen).
These metabolites have more affinity for ER than tamoxifen itself (Ali et al., 2016).

Mechanism of action

4-Hydroxytamoxifen works as an estrogen receptor antagonist by inhibiting the transcription of estrogen-responsive genes. The ER/4-OH-TAM complex recruits NCoR and SMRT, the co-repressor proteins, for the regulation of several key genes (Ring & Dowsett, 2004). Tamoxifen, along with the PAX2 protein, exerts its anti-tumor effects by suppressing the expression of the pro-proliferative protein ERBB2 (Ali et al., 2016). In addition to the anti-cancer activity of TAM, intrinsic (de novo) or acquired resistance is a major challenge.

Acquired resistance

Long-term or continuous therapy with tamoxifen allows cells that were initially responding to the drug to acquire resistance (Viedma-Rodríguez et al., 2014). Growing evidence indicates that unresponsiveness to tamoxifen in initially responsive breast tumors leads to breast cancer recurrence in many patients, most probably due to cancer stem cells (CSCs) (Ali et al., 2016). Though the majority of the cells are killed, a few cells that evade treatment progress to a resistant phenotype. Many patients who report recurrence respond to a second-line therapy (Russell et al., 2007). A better insight into the molecular basis of resistance mechanisms can provide novel strategies to bypass the limited therapeutic potential of tamoxifen and make advancements in cancer therapeutics (Hultsch et al., 2018).

Mechanisms of Acquired Tamoxifen Resistance

Due to the complexity of the signal transduction pathways involved, the exact mechanisms underlying acquired resistance remain elusive (Achinger-Kawecka et al., 2020). It is unlikely that a single gene or a specific molecular pathway accounts for tamoxifen resistance in patients. Rather, there exists a dynamic interplay between various pathways that mark the cellular and molecular events leading to drug resistance (Mills et al., 2018).
Loss of ER Function and Expression

The ER-antagonistic effect of tamoxifen is primarily based on receptor targeting. Loss of the target ER confers resistance to endocrine therapy and contributes to tumorigenesis (Dorssers et al., 2001). Previous studies suggest that loss of ER expression predominates in de novo tamoxifen resistance (Clarke et al., 2003). Interestingly, loss of ER results in a change in phenotype from ER positive to ER negative (Chang, 2012). This may be attributed to the transcriptional inhibition of ER genes (Lewis & Jordan, 2005). Epigenetic modification, such as histone deacetylation by HDACs (histone deacetylases) or hypermethylation of CpG islands by DNA methyltransferases (DNMTs), is another major event that causes the silencing of ER genes, thereby reducing ER expression (Chang et al., 2005). It has been well documented that ER-specific miRNAs play an endogenous effector role in RNA interference and repress the mRNA translation of the ER-alpha subdomain (Chang, 2012). Hyper-activation of mitogen-activated protein kinase (MAPK) is another regulatory mechanism for the induction of ER negativity in endocrine-resistant cells. Some other factors resulting in the loss of ER expression include mutations and abnormal splicing. It remains unclear whether ER mutations are of any clinical significance (Wang et al., 2019). However, resistance to antiestrogen therapy can develop even in the absence of any apparent gene mutation. The splicing variants are yet to be evaluated for their relevance to chemotherapeutic resistance (Viedma-Rodríguez et al., 2014).

Pharmacological and Metabolic Aspects

Marked reduction in the intracellular drug concentration is another potential resistance mechanism. This is partly due to increased efflux or decreased influx, the former being linked to the overexpression of P-glycoprotein (Pgp).
Pgp is a 170 kDa ATP-driven membrane pump that expels cytotoxic drugs using the energy released by ATP hydrolysis. It is still unclear to what extent this resistance mechanism works to saturate ER and lower TAM efficacy. The increased metabolic conversion of tamoxifen to estrogenic metabolites further contributes to resistance development in patients diagnosed with breast cancer. Emerging evidence shows that N-desmethyltamoxifen is the major antiestrogenic metabolite present in the serum. Conversely, the serum levels of 4-hydroxytamoxifen (4-OH TAM) are low, yet the binding affinity of 4-OH TAM is much greater than that of tamoxifen. The generation of this hydroxylated product is CYP2D6 dependent. Patients with a wild-type allele of CYP2D6 who were co-administered paroxetine and tamoxifen exhibited decreased plasma levels of 4-OH TAM. Likewise, women who underwent TAM therapy and carried a variant allele of CYP2D6 also had lower concentrations of this TAM metabolite. The results of this study illustrated that pharmacological interactions as well as drug pharmacogenomics serve to decrease TAM efficiency in breast cancer cells. Single nucleotide polymorphism (SNP) in CYP2D6 is another factor responsible for a null or minimal tamoxifen response in cancer cells (Viedma-Rodríguez et al., 2014). This initiates competition between estrogenic and antiestrogenic metabolites for ER activation. As per previous studies, a huge amount of estrogen would be needed to diminish the anti-estrogenic effects of tamoxifen. Furthermore, it has been observed that the concentration of TAM metabolites in the serum remains constant for several years following therapy.

Altered Patterns of Co-Regulators

Co-activators and co-repressors have dominating roles in the context of ER. It has been predicted that aberrant expression of co-regulators can potentially lead to a tamoxifen-resistant phenotype.
Co-repressors

Upon recruitment of co-repressor proteins to ER, multi-subunit repressor complexes are formed which involve HDACs for the condensation of chromatin and for transcription repression. Notably, co-repressors such as NCoR1 and NCoR2 are conditionally recruited, provided that an antagonist (tamoxifen) has pre-formed a complex with ER. Consequently, the agonist activity will be inhibited. This implies that the minimal effects of TAM therapy may in turn be due to a progressive decrease in co-repressor activity. Low protein levels of NCoR point towards poor prognostic value, due to which the cells are subject to acquiring resistance in both in vitro and in vivo models (Osborne & Schiff, 2003).

Deregulation in Cellular Kinases or Signal Transduction Pathways

The mechanistic behavior of the estrogen receptor cannot be studied in isolation from other signal cascade pathways. It is critical to study the dynamic regulatory interactions among ER, growth factors and signal transduction pathways for a better understanding (Osborne & Schiff, 2003).

Growth factor signaling

It has been evidenced that cross talk occurs between the estrogen receptor and growth factor receptor pathways, most importantly those of the epidermal growth factor receptor (EGFR/HER2) and insulin-like growth factor receptor (IGFR) families (Gururaj et al., 2006). ER has two activation function (AF) domains, AF-1 and AF-2 (Puranik et al., 2019). ERK1 and ERK2 of the MAPK family modulate expression by phosphorylating ER at the serine 118 position within the AF-1 domain. This increases the responsiveness of ER to ligand and activates ER directly, independent of ligand (Okat, 2018). Besides, serine 167 in the AF-1 domain of ER is also phosphorylated through the action of ribosomal S6 kinase (RS6), which is itself activated by ERK1 and ERK2. This means that an increase in the expression of ERK1/2 can possibly confer resistance to an anti-estrogen regimen (Shou et al., 2004).
Additionally, growth factor signaling has an indirect impact on ER activation by stimulating the co-activator response and impairing the co-repressor response (Nicholson et al., 2007). Such responses are believed to be achieved by phosphorylating transcriptional co-regulators, which influences their nuclear sub-localization (Awan & Esfahani, 2018). It is noteworthy that up-regulation of the peptide growth factor signaling pathway during endocrine therapy is a clear indication of TAM resistance. The ligand-receptor complex physically and directly relates to the activation of IGFR and is associated with the downstream ERK1/2 MAPK signaling cascade (Ali et al., 2016). Membrane-bound ER also has a direct physical association with HER2 and with the transactivation of EGFR by phosphorylation (Davoli et al., 2010). Higher expression of EGFR/HER2 in the MCF7 cell line enables cell proliferation and inhibits cellular apoptosis. This is another approach by which cells acquire resistance to tamoxifen (Fan et al., 2015).

Role of Oxidative Stress Mediated Pathway

The interaction of ER with the stress-activated protein kinase/c-Jun NH2-terminal kinase pathway is important to comprehend resistance in a stress-induced environment. AP-1 is a transcriptional complex of Fos and Jun whose binding to DNA at the AP-1 response element is initiated by dimerization. AP-1 transcription is enhanced in response to phosphorylation of these components by c-Jun NH2-terminal kinases (JNKs) or stress-activated protein kinases (SAPKs). Administration of tamoxifen for a longer time may create intracellular oxidative stress. Under such conditions, these enzymes are activated. Cells that manifest a rise in AP-1 activity are thought to have developed TAM resistance. Elevation in oxidative stress may result in a more aggressive resistant phenotype.

P38 MAPK regulation

P38 MAPK activation has also been reported upon detection of extracellular stimuli such as chemical or physical stress, cytokines, etc.
This pathway is switched on in cell lines that express ER, and 4-OH TAM eventually causes apoptosis. Tamoxifen-induced stress negatively affects the signaling of p38 MAPK. Due to the inhibition of this pathway, 4-OH TAM fails to induce apoptosis and the cells tend to resist anti-cancer therapy. Peroxiredoxins (Prxs) are anti-oxidant modulators of intracellular redox cycling. They are known for protecting cells from oxidative damage through the modulation of apoptosis. Prx5 is found in various locations where reactive oxygen species (ROS) may be generated, such as mitochondria, cytosol and peroxisomes. The expression of prx5 is induced by ROS. Higher expression of prx5 has been reported in breast cancer tissues; thus, this anti-oxidant has a role in mammary tumorigenesis. GATA-1 has been shown to repress the transcription of prx5, and this correlates with agonist-bound ER activity. This makes TAM inefficient, as it loses its ability to prevent the repression of the anti-oxidant prx5 protein. Cells will continue to grow despite TAM treatment because of apoptotic suppression.

Inhibition of Autophagy and Apoptosis

Autophagy refers to self-eating. It is a mechanism for the digestion of cytoplasmic contents, including misfolded or unfolded proteins and sub-cellular organelles, by the formation of autophagosomes. It is a way to restore homeostasis under stress conditions. Clinical research has figured out the role of autophagy and apoptosis in resistant breast cancer. Overexpression of beclin-1, an important biomarker for autophagy, tends to reduce the sensitivity of cancer cells to tamoxifen.

Role of Cancer Stem Cells (CSCs)

Cells having stem-cell-like properties that constitute a sub-population able to sustain tumorigenesis are termed cancer stem cells (CSCs).

Conclusion

Accumulating data on the different mechanisms involved in tamoxifen resistance suggest that they can be targeted to eradicate drug resistance and improve therapeutic interventions.
However, the present knowledge is limited to research mostly on in vitro models; therefore, further research with large animal models and clinical samples can provide a concrete basis for the control of chemotherapeutic drug resistance.

Acknowledgments

The author acknowledges all the scientists who have worked to elucidate the different mechanisms involved in the complex field of chemotherapeutic drug resistance.
Bioactive Oligopeptides from Ginseng (Panax ginseng Meyer) Suppress Oxidative Stress-Induced Senescence in Fibroblasts via NAD+/SIRT1/PGC-1α Signaling Pathway

The physicochemical properties and multiple bioactive effects of ginseng oligopeptides (GOPs), plant-derived small-molecule bioactive peptides, suggest a positive influence on health span and longevity. Given that cellular senescence is the initiating factor and a key mechanism of aging in the organism, the current study sought to explore the effects of GOPs on H2O2-induced cellular senescence and its potential mechanisms. Senescence was induced in mouse embryonic fibroblasts NIH/3T3 by 4 h of exposure to 200 µM H2O2 and confirmed using the CCK-8 assay and Western blot analyses of p16INK4A and p21Waf1/Cip1 after 24 h of growth medium administration with or without GOPs supplementation (25, 50, and 100 µg/mL). We found that GOPs delayed oxidative stress-induced NIH/3T3 senescence by inhibiting G1 phase arrest, increasing DNA synthesis in the S phase, decreasing the relative protein expression of p16INK4A and p21Waf1/Cip1, promoting cell viability, protecting DNA, and enhancing telomerase (TE) activity. Further investigation revealed that the increases in antioxidative and anti-inflammatory capacity might form the basis of the senescence-retarding effects of GOPs. Furthermore, GOPs supplementation significantly improved mitochondrial function and mitochondrial biogenesis via the NAD+/SIRT1/PGC-1α pathway. These findings indicate that GOPs may have a positive effect on health span and lifespan extension by combating cellular senescence, oxidative stress, and inflammation, as well as by modulating the longevity-regulating NAD+/SIRT1/PGC-1α pathway.

Introduction

Aging is characterized as a time-dependent functional impairment that contributes to increased vulnerability to multiple human pathologies and death [1].
Organisms adapt and respond to the surrounding nutrient sources, and their various life activities are regulated by a network of nutrients and nutrient-sensing pathways [2]. The latest evidence suggests that restriction of dietary energy, protein, or amino acids can extend lifespan, improve metabolic disorders, and reduce the risk of aging-related diseases [3]. However, an observational study has shown that a high-protein diet is associated with a substantial reduction in cancer prevalence and all-cause mortality in populations over 65 years old [4]. Physiological changes occur with aging, such as decreased appetite, sensory loss, dysphagia, masticatory dysfunction, and gastrointestinal disorders, resulting in decreased food and energy intake. Although energy demands decrease with age due to a lower basal metabolic rate, the need for protein increases to make up for the age-related loss of skeletal muscle mass and function [5]. Dietary protein is a major source of amino acids. Reduced protein synthesis and catabolism of other amino acids result from an inadequate supply of amino acids.

The H2O2-induced NIH/3T3 senescence model was established in a manner previously described. NIH/3T3 cells were incubated for 4 h in growth medium supplemented with different concentrations of H2O2 (from 50 to 800 µM) and then cultured in growth medium for 24 h. Cell viability and the relative protein expression of p16INK4A and p21Waf1/Cip1 were estimated to screen for the effective intervention dose of H2O2. The present research established a total of 5 groups: the control group and model group, and the low-, middle-, and high-dose GOPs groups. The control group was seeded in growth medium. The model group was cultured in growth medium with 200 µM of H2O2 for 4 h and then incubated in growth medium without H2O2 for 24 h.
The three GOPs administration groups were cultured in growth medium supplemented with 200 µM of H2O2 and GOPs (25, 50, and 100 µg/mL, respectively); the medium was removed after 4 h, and the cells were then cultured in an H2O2-free growth medium containing 25/50/100 µg/mL of GOPs. The cells were collected for further investigation after being exposed to GOPs.

Cell Viability Assay

Cell viability was assayed using the cell-counting kit-8 (CCK-8) assay (KeyGEN, Nanjing, China) according to the manufacturer's protocol. Briefly, about 1 × 10⁴ cells/well were added to 96-well plates. After treatment according to the protocol, 10 µL of CCK-8 was added to each well and cultured at 37 °C for 1-4 h. The absorbance of each well was measured at 450 nm with a microplate reader (BMG FLUOstar Omega, Germany).

Flow Cytometry

Cells at 2 mL/well (about 2 × 10⁵) were seeded in 6-well plates and treated according to the protocol. For the cell cycle analysis, cells were harvested, washed twice with phosphate-buffered saline (PBS), and then fixed with 75% ethanol overnight at 4 °C. Following this, cells were washed three times with PBS, incubated with propidium iodide staining solution and RNase A (Beyotime, Shanghai, China) for 30 min at 37 °C, and analyzed using a Flow Cytometer (Beckman Coulter, Brea, CA, USA). For intracellular reactive oxygen species (ROS) analysis, cells were harvested, washed once with PBS, and incubated for 20 min at 37 °C with 10 µM of 2,7-dichlorofluorescein diacetate (Beyotime, Shanghai, China). After being washed with PBS three times, the cells were analyzed using a Flow Cytometer (Beckman Coulter, Brea, CA, USA). For mitochondrial membrane potential (∆Ψm) analysis, cells were harvested and stained with 500 µL of 1× JC-1 dye solution (Beyotime, Shanghai, China) at 37 °C for 20 min in the dark.
Then, the cells were washed twice, resuspended, and analyzed using a Flow Cytometer (Beckman Coulter, Brea, CA, USA).

Western Blot Analysis

Cells at 2 mL/well (about 2 × 10⁵) were seeded in 6-well plates and treated according to the protocol. Cells were collected, washed twice with PBS, and then resuspended in RIPA Lysis Buffer (Biosharp, HeFei, China) supplemented with 1 mM of phenylmethanesulfonyl fluoride. Protein was extracted by centrifugation at 14,000× g for 15 min at 4 °C, and the protein concentration was measured with a BCA protein assay kit (Thermo Scientific, Waltham, MA, USA). Equal amounts of protein (80-150 µg) were separated on 10-20% SDS-PAGE gels and transferred to PVDF membranes at different electric currents according to the size of the protein molecules. The membranes were blocked for 2 h in 5% nonfat milk dissolved in Tris-buffered saline containing 0.05% Tween-20 (TBST) at room temperature.

Statistical Analysis

Statistical analyses were performed using SPSS software version 24 (SPSS Inc., Chicago, IL, USA). Data are expressed as mean ± standard deviation (SD) and were analyzed by one-way analysis of variance (ANOVA); for multiple comparisons among groups, the least significant difference test (equal variances assumed) or Dunnett's T3 test (equal variances not assumed) was used. p < 0.05 indicated a statistically significant difference.

Effect of GOPs on Hallmarks of NIH/3T3 Senescence

Cell cycle analyses indicated that aggravated oxidative stress induced partial G1 arrest in NIH/3T3, as evidenced by a higher percentage of cells in the G1 phase along with a concurrent decrease in the S-phase fraction (p < 0.05). Compared with the model group, GOPs administration significantly inhibited oxidative stress-induced cell cycle arrest in the three GOPs-treated groups, as reflected by a lower percentage of cells in the G1 phase and a higher percentage of cells in the S phase (p < 0.05) (Figure 1A).
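The statistical workflow described above (one-way ANOVA across treatment groups, followed by post hoc comparisons) can be illustrated with a hand-computed F statistic. The authors used SPSS; the viability-like numbers below are invented for illustration only:

```python
# One-way ANOVA computed by hand (pure Python) to illustrate the
# omnibus test behind the reported p-values. Group values are
# hypothetical cell-viability percentages, not the study's data.

def one_way_anova_f(groups):
    """Return the F statistic for a list of sample groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # between-group sum of squares (each group's mean vs. the grand mean)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # within-group sum of squares (each value vs. its group mean)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

control = [100.0, 98.5, 101.2, 99.8, 100.5]
model = [72.1, 70.4, 74.9, 69.8, 73.2]
gops_50 = [88.3, 86.9, 90.1, 87.5, 89.0]

f = one_way_anova_f([control, model, gops_50])
print(f > 3.89)  # True: exceeds the F(2, 12) critical value at alpha = 0.05,
                 # so post hoc comparisons (LSD or Dunnett's T3) would follow
```

A significant omnibus F only says that at least one group mean differs; the pairwise comparisons the authors report (e.g., model vs. each GOPs dose) come from the subsequent post hoc tests.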
We further analyzed the relative protein expression level of p16 INK4A and p21 Waf1/Cip1 , and they were obviously upmodulated in the model group (p < 0.05). Meanwhile, this phenomenon was suppressed with the supplementation of GOPs with respect to the model group (p < 0.05) ( Figure 1B). CCK-8 cell viability assay was applied to measure cell viability, and decreased cell viability was observed in four H2O2-treated groups (p < 0.05). Compared with the model group, the cell viability was both significantly upmodulated in the 50 and 100 µg/mL of GOPs administration groups (p < 0.05) ( Figure 1C). γ-H2A.X was used as a reference marker for DNA damage. Enhanced DNA damage as a consequence of the exogenous administration of H2O2 was observed in the model group compared with the control group (p > 0.05). Compared with the control group, the relative protein expression of γ-H2A.X was significantly decreased in the 50 µg/mL of GOPs supplementation group (p < 0.05) ( Figure 1D). Regarding TE activity, no obvious changes were found between the control and model groups (p > 0.05). However, the TE activity was significantly enhanced in the 50 and 100 µg/mL of GOPs supplementation groups compared with the control group and the model group (p < 0.05) ( Figure 1E). Effect of GOPs on Oxidative Stress Status The evaluation of oxidative stress status was completed with assessment of intracellular ROS, GSH-Px, SOD, and MDA. Compared with the control group, the intracellular ROS level in the model group was significantly increased (p < 0.05). Compared with the model group, the ROS generation tended to decrease in the 25 and 50 µg/mL of GOPs CCK-8 cell viability assay was applied to measure cell viability, and decreased cell viability was observed in four H 2 O 2 -treated groups (p < 0.05). Compared with the model group, the cell viability was both significantly upmodulated in the 50 and 100 µg/mL of GOPs administration groups (p < 0.05) ( Figure 1C). 
The antioxidant enzyme activities were inhibited in oxidative stress-induced senescent cells. Compared with the control group, GSH-Px and SOD activities were dramatically decreased in the model group (p < 0.05). Compared with the model group, the 50 µg/mL GOPs supplementation group showed significantly enhanced GSH-Px activity (p < 0.05), while all three GOPs concentrations markedly boosted SOD activity (p < 0.05) (Figure 2C,D). MDA is the main metabolite of lipid peroxidation. Further analysis showed that the MDA concentration in the model group was significantly increased with respect to the control group (p < 0.05). Compared with the model group, the 25 and 50 µg/mL GOPs supplementation groups tended to decrease MDA production to the basal level (p > 0.05) (Figure 2E).
Effect of GOPs on Senescence-Associated Secretory Phenotype (SASP)

The production of IL-6, IL-1β, MMP-3, ICAM-1, and VCAM-1 was tested to determine the impact of GOPs on the senescent NIH/3T3 inflammatory phenotype (Figure 3A-E). Compared with the control group, the concentrations of IL-6, IL-1β, MMP-3, ICAM-1, and VCAM-1 were significantly higher in the model group (p < 0.05). Compared with the model group, the GOPs supplementation groups tended to decrease IL-1β secretion to the normal level (p > 0.05), while the 25 and 50 µg/mL GOPs supplementation groups dramatically reduced IL-6 secretion (p < 0.05). The GOPs administration groups also significantly decreased the ICAM-1 concentration (p < 0.05), and the 50 µg/mL GOPs supplementation group significantly inhibited MMP-3 secretion (p < 0.05). One unanticipated finding was that the ICAM-1 level significantly increased in the three GOPs administration groups compared with the control group and model group (p < 0.05).
To explore the anti-inflammatory mechanism of GOPs, we examined the effect of GOPs on markers of the NF-κB pathway. NF-κB is an important mediator of cellular responses to inflammatory stimuli. Compared with the control group, H2O2 intervention tended to enhance the activation of NF-κB (p-NF-κB/NF-κB) in the model group (p > 0.05). Compared with the model group, the activation of NF-κB was strongly attenuated in the 50 µg/mL GOPs supplementation group (p < 0.05) (Figure 3F).

Effect of GOPs on Mitochondrial Function and Biogenesis

Loss of mitochondrial membrane potential and impairment of mitochondrial bioenergetics were detected in senescent NIH/3T3 cells. Compared with the control group, mitochondrial membrane potential declined considerably in the model group (p < 0.05). Compared with the model group, mitochondrial membrane potential was significantly enhanced in the 25 and 50 µg/mL GOPs supplementation groups (p < 0.05) (Figure 4A). We further analyzed the effect of GOPs on the mitochondrial biogenesis signaling pathway NAD+/SIRT1/PGC-1α. Compared with the control group, the NAD+ concentration and the NAD+/NADH ratio both significantly decreased in the model group (p < 0.05).
Compared with the model group, NAD+ levels and the NAD+/NADH ratio were significantly increased in all the GOPs administration groups (p < 0.05) (Figure 4B,C). Compared with the control group, the relative protein expression of SIRT1 tended to increase in the model group (p > 0.05). Moreover, the relative protein expression of SIRT1 was greatly decreased in the 50 µg/mL GOPs supplementation group compared with both the control and model groups (p < 0.05) (Figure 4D). Compared with the control group, the relative protein expression of PGC-1α was significantly downregulated in the model group (p < 0.05). Compared with the model group, the relative protein expression of PGC-1α was significantly increased in the 50 µg/mL GOPs-treated group (p < 0.05) (Figure 4E).

Discussion

There is a growing body of research demonstrating the effectiveness of bioactive peptides in extending lifespan and delaying aging.
GOPs are plant-derived protein hydrolysates with a distinct amino acid pattern and lower proportions of methionine and branched-chain amino acids, which are linked to a shortened lifespan. Several scientific studies in our laboratory suggest that GOPs may have a positive influence on prolonging a healthy lifespan. We attempted to probe the implications of GOPs for the aging process, and their potential mechanisms, at the cellular level through a series of experiments. Therefore, in the present work, a biological senescence model was established by culturing NIH/3T3 in vitro, and the effect of GOPs on cell senescence was investigated. Many researchers have utilized mouse fibroblast NIH/3T3 cells to measure cellular aging and identify aging-related changes in the organism [21][22][23]. One significant mechanism of aging is the accumulation of senescent cells, which can be characterized as a stable arrest of the cell cycle caused by telomere shortening [24]. Other aging-associated incitants, such as oxidative stress, ionizing radiation, and nutritional imbalance, trigger senescence independently of the telomeric process, in what is known as premature senescence [25]. Among the contributing factors, ROS-induced oxidative stress is one of the most important [4]. Thus, to replicate naturally senescent cells, the oxidative stress-induced premature senescence model is frequently employed in scientific research [26][27][28], including the current study. Multiple indicators were employed to assess cell senescence in the current study. We found that 4 h of 200 µM H2O2 treatment of NIH/3T3 successfully blocked the cell cycle in the G1 phase and decreased the proportion of cells in the S phase in contrast to the control group.
Cyclin-dependent kinases (CDKs) and cyclin-dependent kinase inhibitors (CDKIs) control cell cycle progression. The main driving factors of cell cycle arrest during aging are the CDKIs encoded at the CDKN2A (p16 INK4A) and CDKN1A (p21 Waf1/Cip1) loci. We further found that the protein expression of both was significantly upregulated in oxidative stress-induced senescent NIH/3T3. Synchronously, senescent NIH/3T3 exhibited an accelerated loss of cell viability and proliferation. Irreparable DNA damage can induce senescence [29], and γ-H2A.X is considered a biomarker of DNA damage [30]. It was found that 200 µM H2O2 treatment tended to accelerate DNA damage. However, no significant reduction in TE activity, which is needed to completely replicate the terminal ends of linear DNA molecules [31], was found in comparison with normal NIH/3T3. We report for the first time that GOPs postponed oxidative stress-induced senescence of NIH/3T3. GOPs supplementation significantly inhibited cell cycle arrest and promoted DNA synthesis in the S phase. Further research revealed that one essential mechanism for this phenomenon may be correlated with the inhibition of p16 INK4A and p21 Waf1/Cip1 expression in NIH/3T3. Bioactive peptides are known to enhance cell proliferation [32,33]. Consistent with the literature, this study found that GOPs promote cell viability in senescent NIH/3T3. This also accords with our earlier observations, in which GOPs supplementation greatly increased mouse spleen lymphocyte proliferation [17]. In addition, GOPs can protect DNA against oxidative stress in NIH/3T3. Given that one common contributor to aging is the accumulation of genetic damage throughout life [34], stabilization of genomic homeostasis by GOPs is greatly important for life extension. Surprisingly, GOPs significantly increased TE activity in NIH/3T3, indicating a capacity to counter telomere shortening.
Taken together, these findings strengthen the hypothesis that GOPs may have a positive influence on prolonging healthy lifespan by delaying cellular senescence. Indeed, several researchers have used cellular and animal models, as well as human clinical trials, to demonstrate that bioactive peptides isolated from marine foods, such as sea cucumber, Sepia esculenta, and herring milt, have antiaging properties [35]. There is compelling evidence that oxidative stress contributes to a variety of aging pathologies [36], and one essential strategy for combating the aging process is mitigating the level of oxidative damage. We showed that GOPs possess a moderate radical scavenging ability at lower dosages. However, the observed difference between the model and the GOPs groups was not obvious in this study. This result may be explained by the fact that the level of exogenously supplied and endogenously generated ROS is overwhelming and beyond the capacity of the natural antioxidant GOPs. Furthermore, in several studies, peptides isolated from egg, milk, and plants have been identified as having free radical scavenging abilities [37]. Further analysis showed that GOPs strongly enhanced the antioxidant enzyme system and tended to reduce MDA production during the senescence process in NIH/3T3. This also accords with our earlier observations, which showed that GOPs significantly increased the activities of the antioxidant enzymes SOD and GSH-Px and decreased MDA content in the liver, pancreas, and muscle of mice under various pathophysiological conditions [15,16,18]. In addition, bioactive peptides extracted from different animal and plant proteins have been proven to have antioxidative activity [38]. Overall, bioactive peptides work through a variety of mechanisms to exert their antioxidant properties, but this is mainly accomplished through delivering hydrogen atoms or electrons to engage in free radical scavenging processes and suppressing free radical production by chelating metal ions [39].
Second, antioxidant enzymes are critical targets for peptide activity in cells and organisms [35]. As reported previously, smaller molecular weight peptides are more likely to approach the free radical reaction center to complete the oxidation reaction chain, and short peptides with fewer than 8 amino acids exhibit substantial antioxidant properties [40]. GOPs are oligopeptides with molecular weights of less than 1000 Da, which serves as the structural foundation for their antioxidant mechanisms. In addition, peptides with larger proportions of polar amino acids typically exhibit stronger antioxidant activity, since chelation by their side chains suppresses free radical oxidation [39]. The polar amino acid content of the GOPs was approximately 71.90%, further confirming the strong antioxidant activity of GOPs. In addition to cell cycle arrest, the induction of a SASP is also one major characteristic of cellular senescence [41]. SASP factors, such as IL-1 and IL-6, trigger paracrine senescence in neighboring areas to reinforce senescence in tissues [42]. We showed that GOPs supplementation significantly inhibited the secretion of IL-6, MMP-3, ICAM-1, and VCAM-1 in NIH/3T3. This could be one of the crucial mechanisms by which GOPs combat cellular senescence. The proinflammatory transcription factor NF-κB is largely responsible for the SASP, which is, to a large extent, a transcriptional program [43]. We confirmed that GOPs administration suppressed NF-κB activation. This may, at least in part, account for the mechanism by which GOPs exert their anti-inflammatory effects. Similar observations were made in our previous in vivo experiments, which showed that GOPs inhibited the secretion of cytokines such as IL-1, IL-6, and TNF-α by downregulating NF-κB in mouse models of varying degrees of inflammation [15,18]. Furthermore, several reports have shown that bioactive peptides isolated from soybean, sea cucumber, and fish possess prominent anti-inflammatory potential [44][45][46].
In accordance with the assumption that genomic instability is a fundamental driver of the SASP, the key catalyst for NF-κB stimulation is the DNA damage response [47,48]. We speculate that the anti-inflammatory potential of GOPs may be partly attributable to the DNA-protective effect mentioned above. Meanwhile, there exists a feedback loop in which SASP expression activates ROS production and the DNA damage response [49]. Our findings suggest that GOPs regulate all links of this feedback loop, thus combating senescence progression from multiple dimensions. Previous studies in our laboratory have shown that GOPs can improve the mitochondrial function of skeletal muscle in mice by increasing the mitochondrial DNA content and promoting the mRNA expression of NRF-1 and TFAM [16]. Mitochondria are an essential component in the control of aging. The mitochondrial theory of aging hypothesizes that aging-related sustained mitochondrial dysfunction leads to increased generation of ROS, which in turn aggravates mitochondrial degradation and cellular damage [48]. Thus, we focused on how GOPs affected mitochondrial activity. The results of this study showed that GOPs supplementation improved mitochondrial function, as measured by the increased mitochondrial membrane potential in NIH/3T3. In accordance with the present results, numerous investigations have indicated that bioactive peptides derived from marine food can repair or potentiate mitochondrial function after exposure to external stimuli such as H2O2 and UV radiation [50][51][52]. Aging-associated mitochondrial dysfunction is normally accompanied by impaired turnover of mitochondria owing to decreased biogenesis and clearance [48]. The NAD+/SIRT1/PGC-1α signaling pathway, which is involved in mitochondrial biogenesis, is a classical longevity-regulating pathway [53]. We found that GOPs supplementation strongly enhanced NAD+ levels and the NAD+/NADH ratio while upregulating the protein expression of PGC-1α in NIH/3T3.
Meanwhile, the relative protein expression of SIRT1 was lower in the GOPs-treated group than in the model group. Given that SIRT1 needs NAD+ to function, this result may be explained by the observation that NAD+ depletion decreased SIRT1 consumption in the model group and resulted in a relatively higher level of SIRT1 expression. Lin and Feng found that bioactive peptides isolated from crops such as corn and potato can promote mitochondrial biogenesis through upregulating PGC-1α expression [54,55]. Bioactive peptides can be used as a protein substitute in the daily diet while ensuring amino acid requirements are met. Several studies of amino acid metabolism during aging have also confirmed the correlation between amino acid supplementation and mitochondrial biogenesis. Romano and colleagues reported that essential amino acid supplementation in mice increases their lifespan while also promoting mitochondrial biogenesis at the molecular level [6]. Several clinical studies have also shown that branched-chain amino acid supplementation may lessen sarcopenia in the elderly by boosting mitochondrial biosynthesis [2]. In addition, researchers have found that other biologically active constituents of ginseng, such as ginsenosides, can stabilize mitochondrial membrane potential, increase intracellular ATP production, and enhance mitochondrial biogenesis by activating NRF-1, TFAM, and PGC-1α [56,57]. Overall, the main finding of this study is that GOPs delayed oxidative stress-induced NIH/3T3 senescence via antioxidant and anti-inflammatory activities and the promotion of mitochondrial biogenesis. These results provide further support for the hypothesis that GOPs may have a positive influence on prolonging lifespan and health span via combating cellular senescence, oxidative stress, and inflammation and protecting mitochondria. Compared with in vitro models, lifespan experiments on mice are more expensive and time-consuming.
The current investigation offers a scientific basis for long-term in vivo studies in the future. The results also provide a basis for continued studies to better understand the bioactivity of GOPs. Moreover, evidence now suggests a correlation between oxidative stress and inflammation and chronic diseases such as diabetes, hypertension, and atherosclerosis. As a consequence, anti-inflammatory and antioxidant therapies have emerged as potential new approaches to monitor, prevent, and treat chronic diseases [58]. GOPs may have promising applications in this field. A limitation of this study is that oxidative stress-induced premature senescence cannot fully simulate the natural cellular senescence process. Subsequent primary cell cultures combined with in vivo experiments will offer more conclusive proof. Further research is required to establish the precise exposure dose of GOPs to the cell and to fully understand the effects of GOPs on cell metabolism and the related physiological functions. More efforts will be required to clarify the role of GOPs in prolonging lifespan and health span.

Conclusions

We have demonstrated that GOPs delayed oxidative stress-induced senescence of NIH/3T3 through the inhibition of cell cycle arrest, promotion of DNA synthesis in the S phase, downregulation of p16 INK4A and p21 Waf1/Cip1 expression, stimulation of cell proliferation, protection of DNA, and promotion of TE activity. Further investigation revealed that the underlying mechanisms of GOPs-delayed senescence were associated with its antioxidant activity, anti-inflammatory effect via downregulation of NF-κB, and promotion of mitochondrial biogenesis via the NAD+/SIRT1/PGC-1α pathway. These findings lend support to the hypothesis that GOPs may have a positive influence on extending lifespan and health span via combating cellular senescence, oxidative stress, and inflammation, and protecting mitochondria.
Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy.
Chromatographic modeling of the release of particle-adsorbed molecules into synthetic alveolar surfactant

Pseudophase liquid chromatography was used to measure the thermodynamic parameters governing adsorption of organic molecules from the surfaces of carbonaceous particles into liposomal zwitterionic mobile phases. These mobile phases contain many of the important physicochemical parameters of alveolar surfactant. Results show that physical desorption into model surfactant will be dependent upon the heat of solution and the heat of adsorption. Dominance of either thermodynamic parameter is dependent upon the relative polarity of the adsorbent surface and the adsorbate molecule. It is postulated from data obtained from simple molecules containing relevant organic functional groups that physical desorption of environmental agents from the surfaces of particulate complexes into alveolar surfactant may be predicted both by quantification of the polarity of the system and of the extent of surface coverage under investigation.

Introduction

Respiration and alveolar deposition of carbonaceous particle-environmental agent complexes present the scenario for desorption of the surface-adsorbed molecules into regions of the lung not normally exposed to organic toxicants. The change in free energy when a particle-bound adsorbate is released into the surfactant-rich fluid found in the alveolar region of the lung will determine equilibrium concentrations of the adsorbate in the adsorbed and solution phases. If it is assumed that entropy contributions are small, enthalpy and bioavailability may be related in a dose-dependent manner. Chromatography is a useful tool to model the dynamic equilibrium of desorption of complexes typically formed by combustion processes. This paper is the third in a series that describes research efforts to build a thermodynamic model to predict the bioavailability of adsorbed molecules on the surface of respirable particles.
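The stated link between enthalpy and bioavailability follows from the standard relation between the free-energy change of desorption and the equilibrium constant, K = exp(-ΔG/RT); when entropy contributions are small, ΔG ≈ ΔH, as the text assumes. A minimal numerical sketch (the enthalpy values below are invented for illustration, not measured data from this work):

```python
# Sketch of the stated thermodynamic relation: K = exp(-dG/(R*T)), with the
# approximation dG ~ dH (entropy contribution assumed small, as in the text).
# The two enthalpy values are invented for illustration only.
import math

R = 8.314       # gas constant, J/(mol*K)
T = 310.15      # body temperature, K

def desorption_constant(dH_kJ_mol: float) -> float:
    """Equilibrium constant [solution]/[adsorbed] for desorption, dG ~ dH."""
    return math.exp(-dH_kJ_mol * 1000.0 / (R * T))

for dH in (5.0, 25.0):          # weakly vs strongly bound adsorbate
    K = desorption_constant(dH)
    released = K / (1.0 + K)    # equilibrium fraction released into surfactant
    print(f"dH = {dH:4.1f} kJ/mol: K = {K:.3e}, fraction released = {released:.3f}")
```

A larger net enthalpy of desorption gives a smaller equilibrium constant and hence a smaller released fraction, which is the dose-dependence the introduction alludes to.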
*Division of Environmental Chemistry, Department of Environmental Health Sciences, The Johns Hopkins University School of Hygiene and Public Health, Baltimore, MD 21205.

Prior research from this laboratory has investigated the release by high-performance liquid chromatography (HPLC), in which the adsorbent is the solid support, the adsorbate the solute, and the mobile phase pure solvents of a range of polarities (1). We have also quantified the enthalpies of the gas-phase adsorption of these same solutes onto the same carbon black adsorbents, which vary in degree of oxidation and surface area (2). In a related investigation, we have successfully employed opaque micellar mobile phases in HPLC with pellicular C18 reverse-phase column packing materials (3). The results of these interrelated studies have allowed us to fully characterize the surface properties of these heterogeneous carbon black adsorbents and showed that carbon blacks behave as normal-phase chromatographic columns with nonpolar mobile phases but act as reverse-phase packings when polar mobile phases are used. The synthetic alveolar surfactant used as a mobile phase in this investigation contains many of the relevant physicochemical properties of actual alveolar surfactant, since it is composed primarily of the zwitterionic phospholipid, dipalmitoyl phosphatidylcholine, which forms complex bilayer liposomes in aqueous solution. The chemical nature of the liposomal synthetic lung fluid indicates that it is likely to contain both polar and nonpolar regions and, therefore, desorption is postulated to be dependent upon the region of the liposome that contacts the particle-adsorbate complex. The primary physiological function of alveolar surfactant is to maintain the surface tension of the alveolus. Surfactant may be viewed conceptually as an epithelial coating, expanding and contracting with each respiration, washing over respired particles that become embedded in the alveolar epithelium.
Molecules that desorb into the surfactant may, therefore, have secondary contact with resident particles, so that a toxic molecule may remain resident in the lung for a longer time than it would if it were diluted into alveolar surfactant and cleared via normal metabolic processes. The metabolic and cellular components of lung defense and clearance are not addressed in this study, and desorption is assumed to occur solely through physical processes. The mobile phases used in this study model human alveolar surfactant. Synthetic surfactant was used because it is impossible to obtain sufficient quantities of actual surfactant from animals by lung lavage. HPLC with liposomal mobile phases permits physical release to be modeled dynamically, although the mobile phase does present chromatographic difficulties that must be overcome in order to use ultraviolet detection. The method we have employed ensured that the carbon black adsorbents remained unaltered by the passage of these viscous mobile phases. Simple molecules that contain many of the important functional groups of biologically active molecules were used in our studies in order to model the mechanism involved in physical liposomal desorption. The role of the active site on the carbonaceous surface could thus be probed selectively. The use of micellar mobile phases based on the surfactant sodium dodecyl sulfate for liquid chromatography (pseudophase chromatography) was first described by Armstrong (4). This approach to elution chromatography bridges the gap between partition and displacement chromatography. The solute may distribute between an aqueous solution of sodium dodecyl sulfate (the concentration of which corresponds to less than the critical micelle concentration), sodium dodecyl sulfate micelles, and the column packing material. Aqueous sodium dodecyl sulfate micelles are considered to have well-defined structures with negatively charged polar head groups at the water-micelle interface and a nonpolar core.
These micelles attract the sodium ions to their surfaces for electrical neutrality. The use of the resulting ionic complex for liquid chromatography can be considered to be analogous to ion chromatography. It is reasonable to expect that the resulting ion-pair will not interact directly with the column support because the surfaces of reverse-phase chromatographic packings are hydrophobic. Two types of mobile phases, zwitterionic micelles and amphipathic liposomes, were used in the current study. Zwitterionic micelles were prepared from molecules that contain the basic quaternary ammonium ion and the acidic sulfonate ion of equal strengths. The aqueous solubilities (CMCs) of these Zwittergents are the result of the chain lengths of the alkyl group substituted on the quaternary ammonium ion. The liposomal mobile phase contained the major component of the surfactant found in the alveolar region of the lung, dipalmitoyl phosphatidylcholine, with minor amounts of dipalmitoyl phosphatidylethanolamine, cholesterol, and protein. Both zwitterionic micelles and liposomes have zwitterionic polar head groups on their surfaces. The physical difference between these two systems is that the liposomes have aqueous cores and a bilayer structure with nonpolar interiors in the bilayers, whereas micelles have nonpolar cores and no bilayer structure. The concentrations of soluble zwitterionic species will vary since the CMCs of these systems are different and may, therefore, significantly affect the resulting chromatography. Other researchers have used solution fluorescence spectroscopy to investigate the kinetics of the release of polynuclear aromatic hydrocarbons (particularly benzo[a]pyrene) from the surfaces of various carbon blacks into model phospholipid vesicles and rat lung homogenate (5)(6)(7)(8). Phospholipid vesicles composed mainly of dimyristoyl or dipalmitoyl phosphatidylcholines were used as models for alveolar surfactant.
These studies suggested that the surface area (coverage) may play a major role in the release of adsorbed molecules, since release increased with decreasing surface area. It was determined that only a portion of the adsorbed molecules could be released, and it is reasonable to propose, therefore, that the molecules which were released were less strongly bound (1,2). These studies quantified release kinetically by incubating the particle complex with the physiological or model solvents at physiological temperatures but made no attempt to quantify adsorption or desorption on the basis of thermodynamics.

Theory

A solute molecule that is introduced into a liposomic or micellar mobile phase that is passing over a column packing material will distribute in several ways. If the liposomic or micellar mobile phase interacts reversibly with the surface of the column packing material without producing concentration gradients of liposomes or micelles at the surface (i.e., liquid-solid chromatography is operating), then the following equilibria exist:

a) Distribution of the solute between the aqueous phase and the liposomes or micelles:

i(aqueous phase) ⇌ i(liposome)    [1]

K_iL,AQ = [i]_L / [i]_AQ    [2]

The position of this equilibrium is based upon the lipophilicity of the solute. It will be independent of the column packing material but dependent upon the concentration of liposomes or micelles.

b) Distribution of the solute between the liposomes or micelles and the column packing material:

i(liposome) ⇌ i(adsorbed)    [3]

K_iA,L = [i]_A / [i]_L    [4]

If the solute distributes preferentially into the liposomes or micelles, then any retention of the solute by the column packing material must be the result of the distribution of the solute between the liposomes or micelles and the column packing material. The contribution of this distribution will be dependent upon the liposome or micelle concentration and the properties of the column packing material.
c) Distribution of the solute between the aqueous phase and the column packing material:

i(aqueous phase) <-> i(adsorbed)   [5]
K_iA,AQ = [i]_A / [i]_AQ   [6]

A hydrophilic solute (i.e., one that is present in the aqueous component of the mobile phase) may interact with the column packing material. The contribution of this interaction may be affected by changing the concentration of the liposomes or micelles in the mobile phase or by the type of column packing material. Therefore, the overall distribution for the solute can be summarized by the following equilibrium.

If the liposomic or micellar mobile phase interacts irreversibly with the surface of the column packing material, producing concentration gradients of liposomes or micelles at the surface (i.e., liquid-liquid chromatography is operating), then the following additional equilibria exist:

a) Distribution of the solute between the liposomes or micelles in the mobile phase and the thin film of liposomes or micelles on the column packing material. If this distribution is a major contributor to the retention of the solute by liquid chromatography, then the solute must be interacting with the sorbed liposomes or micelles. Under these conditions, solutes will be retained as a function of their lipophilicities and as a function of the distribution of the liposomes or micelles between the mobile phase and the surface of the column packing materials. Liquid-liquid chromatography must be responsible for the separation. Retention of lipophilic materials will increase with liposome or micelle concentration. Separation would tend to be independent of the column packing material once sufficient concentrations of liposomes or micelles are added to the mobile phase.

b) Distribution of the solute between the aqueous phase and the thin film of liposomes or micelles on the column packing material:

i(aqueous phase) <-> i(thin film)   [11]
K_iF,AQ = [i]_F / [i]_AQ   [12]

If this distribution is a major contributor to the retention of the solute by liquid chromatography, then solutes will be retained as a function of their lipophilicities and as a function of their distributions with the surface of the column packing materials. Therefore, the overall distribution for the solute may be summarized by the following equilibrium.

These distributions may be summarized qualitatively. A lipophilic solute will distribute into the liposomes or micelles. Retention will decrease as the concentration of liposomes or micelles increases, providing that the liposomes or micelles do not sorb onto the surface of the column packing material. If the liposomes or micelles do sorb onto the surface of the column packing material, retention of lipophilic solutes will increase with increasing liposome or micelle concentrations.

The zwitterionic micelles used as mobile phases are quite different from the liposomal surfactant. A molecule that partitions across the polar head groups of a liposome into the nonpolar interior of the bilayer will have an equal probability of repartitioning either across the polar head groups into the nonpolar interior of the liposome, or across the polar head groups into the extra-liposomal matrix. Therefore, a molecule will have an equal probability of leaving the liposome or being carried with the liposome as it is transported by the stream of mobile phase. A micelle, by definition, has only one layer of polar head groups. 
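The qualitative retention behaviour summarized above can be sketched with a toy model. Both functions below, and every parameter in them, are invented for illustration and are not fitted to any data in this study; they only reproduce the two opposing trends described in the text (retention falling with liposome/micelle concentration when there is no sorption, and rising toward a saturation plateau when a sorbed thin film forms):

```python
# Toy retention model for the qualitative summary above.
# All parameters are hypothetical; nothing is fitted to this study's data.

def k_no_sorption(c, k0=10.0, K_m=0.5):
    # A lipophilic solute partitions into mobile-phase micelles/liposomes,
    # so retention falls as their concentration c (%) rises.
    return k0 / (1.0 + K_m * c)

def k_with_sorption(c, k_base=1.0, film=2.0, c_sat=4.0):
    # Sorbed liposomes build a thin-film stationary phase until the
    # surface saturates near c_sat, so retention first rises with c.
    coverage = min(c, c_sat) / c_sat
    return k_base + film * coverage

for c in (0.0, 2.0, 4.0, 8.0):
    print(c, round(k_no_sorption(c), 2), round(k_with_sorption(c), 2))
```

The saturation plateau in the second function mirrors the maxima later reported for the CN packing, while the monotonic decay of the first mirrors the C18 behaviour.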
A molecule that partitions into the micelle will either remain in the interior of the micelle, by virtue of its solubility in the nonpolar region, or it may partition back into the mobile phase. Therefore, liposomes include the following equilibria, which are, by definition, not present with micelles: distribution of the solute between the bilayer and the aqueous phase, or between the bilayer and the intraliposomal space. The latter partitioning would require a solute molecule to cross the entire bilayer. It has been established that every dipalmitoyl phosphatidylcholine molecule is hydrated with 23 molecules of water, 11 of which are in the interior, the remainder being associated with the head groups. The interior water molecules are so tightly bound that they do not freeze even at subzero temperatures. This suggests that the movement of solute molecules into the interior of the liposome will be sterically hindered (9). Therefore, the intercalation of nonpolar solutes into the phospholipid bilayer is the most probable transport mechanism. Proteins are also present in liposomal surfactant, acting as either integrated or associated proteins with the bilayer. Integrated proteins may facilitate partitioning of some molecules into the nonpolar interior of the liposome, especially if they extend completely across the bilayer. Associated proteins may themselves interact with the adsorbent surface, thereby altering the chromatography.

Experimental

Materials

Liposomal Surfactant Mobile Phases. 
Three different liposomes were used in this study, all prepared by identical procedures using commercially available compounds (Sigma Chemical Company): a) L-α-dipalmitoyl phosphatidylcholine (80.0 mg/mL), L-α-dipalmitoyl phosphatidylethanolamine (0.5 mg/mL), cholesterol (10.0 mg/mL) and albumin (dog) (1.0 mg/mL); b) L-α-dipalmitoyl phosphatidylcholine (80.0 mg/mL) and albumin (dog) (1.0 mg/mL); and c) L-α-dipalmitoyl phosphatidylcholine (80.0 mg/mL), L-α-dipalmitoyl phosphatidylethanolamine (0.5 mg/mL), albumin (dog) (1.0 mg/mL) and cholesteryl oleate (10.0 mg/mL). The constituents of each mobile phase, with the exception of albumin, were dispersed in a mixture of chloroform:methanol (2:1) (v:v). The solutions were evaporated to dryness under nitrogen to remove all organic solvents, leaving a uniform film on the wall of the vessel to be used in subsequent sonications. Tris buffer (0.01 M, pH 8.5) and the albumin were added, and the mixture was sonicated at 23°C for 30 min. This approach is based on the method proposed by Huang (9) and produces single-compartment liposomes. This stock solution of liposomes was then stored at 0°C until subsequent use. Aliquots of this concentrate were diluted with Tris buffer and gently vortexed.

Temperature. For the experiments with liposomal surfactants, isothermal column temperature was maintained with an oil bath kept at constant temperatures of 22°C, 37°C, or 47°C. Only one temperature, 35°C, was used for the experiments with zwitterionic micelles.

Procedure

Synthetic Alveolar Liposomal Surfactant Experiments. Two liquid chromatographic pumps with a variable wavelength ultraviolet detector (λ254 for all compounds except thiophene, λ226) (Varian 2000 Series) were used to investigate adsorption-desorption phenomena of all adsorbates. Column packing techniques and column specifications were as previously described (1). Short columns (5 cm, 6 mm OD, 2 mm ID) were packed with the blacks (0.2 mm average particle size). 
Low flow rates (0.4 mL/min) were found to be stable for extended use. Known aliquots (10 µL) of solutions of the adsorbates in water were introduced via a liquid sampling valve (Rheodyne 7125) onto the chromatographic column. The columns were conditioned with mobile phase prior to use. Adsorption was studied isothermally at different temperatures. The void volume of the column was determined by injection without a column, followed by injection with the column empty. This volume minus the volume occupied by the known weight of carbon contained in the column (based on the density of the carbon in the mobile phase) was used to calculate the void volume (V0). The liquid chromatographic data were recorded on a chart recorder (Varian A25). Postcolumn clarification of the eluting sample-mobile phase was achieved by means of a stream (3.1 mL/min) of clarifying solvent (dichloromethane:methanol:acetonitrile, 1:1:1 v/v/v), via a second pump, connected to a specially designed low-volume connecting "Y." The eluting mixture of liposomal mobile phase and clarifying solvent then passed into a mixer consisting of a column (10 cm long, 0.5 mm ID) packed by gravitation with silanized glass beads (1000-1050 µm). Since clarification was produced postcolumn, no flow corrections were required.

Zwitterionic Micelles. A liquid chromatographic pump with a variable wavelength UV detector (λ254 for all compounds except thiophene, λ226) (Varian 2000 Series) was used in the normal manner for these separations. Known aliquots (10 µL) of solutions of adsorbates were injected via a liquid sampling valve (Rheodyne 7125) onto the chromatographic column. The columns were conditioned with each mobile phase prior to use. The void volume of the column was determined by injection of an unretained transition metal salt. The liquid chromatographic data were recorded on a chart recorder (Varian A25). 
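The void-volume determination described above amounts to simple arithmetic: the measured system volume minus the volume occupied by the carbon itself. A sketch with hypothetical numbers (none of these values appear in the paper):

```python
# Void volume V0 of a packed carbon column, following the procedure above.
# All numeric inputs are hypothetical illustrations.

def void_volume(v_measured_mL, carbon_mass_g, carbon_density_g_per_mL):
    # Volume occupied by the packed carbon, from its mass and its
    # density measured in the mobile phase.
    v_carbon = carbon_mass_g / carbon_density_g_per_mL
    return v_measured_mL - v_carbon

v0 = void_volume(v_measured_mL=0.90,           # empty-column measurement, mL
                 carbon_mass_g=0.35,           # packed carbon black, g
                 carbon_density_g_per_mL=1.8)  # density in the mobile phase
print(f"V0 = {v0:.3f} mL")
```

This is why the solvent densities of the blacks (Physical Characteristics section below) had to be measured in each mobile phase.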
Physical Characteristics of Carbon Blacks

The solvent densities of the blacks were determined by weighing the quantities of mobile phases required to fill a volumetric flask with and without known masses of carbon particles. Low vacuum was used to remove air entrained in the pores of the carbon particles. These densities were used for the determination of column void volumes. The other physical properties of these blacks have been reported previously (1,2).

Results and Discussion

The carbon black columns, as compared to the C18 and CN reverse-phase columns, required longer times to equilibrate with all the mobile phases before stable baselines were achieved. This indicates that the liposomal and micellar zwitterionic mobile phases interact more extensively with carbon surfaces, possibly with the free electrons present in amorphous carbons. The retention data obtained at 37°C with C18 and CN packings, using liposomes prepared from dipalmitoyl phosphatidylcholine, dipalmitoyl phosphatidylethanolamine, cholesterol, and albumin, are contained in Tables 1 and 2. Most of the solutes studied did not interact significantly with the CN packing material. The basic solutes (aniline, pyridine, and quinoline) did not interact with either the CN or the C18 columns, and retention did not change markedly when the concentration of liposomes in the mobile phase was increased. However, when the solute molecules are nonpolar (e.g., thiophene, nitrobenzene, and benzofuran), increasing liposomal mobile-phase concentration did produce changes in retention volumes. The variation in the capacity factors as a function of liposome concentration shows maxima with the CN packing and a continued decrease for the C18 packing (Figs. 1 and 2). These results suggest that there is some sorption of the liposomes onto the CN packing, whereas there is probably no significant sorption of the liposomes on the C18 packing. 
An interpretation of the maxima in the CN data could be that at low concentrations of liposomes, liquid-liquid chromatography is operating (i.e., the sorbed liposomes are acting as a thin-film stationary phase, Eq. 14), and as the concentration increases above 4%, the surface of the packing becomes saturated. No additional liposomes can coat the surface. Therefore, retention decreases as the concentrations (polarities) of the mobile phase and thin-film stationary phase approach one another. The retention data on the C18 column may be explained by typical reverse-phase chromatography (Eq. 8), since retention decreases as the polarity of the mobile phase approaches that of the C18 packing, although this decrease is approximately exponential and not linear. Comparable retention data were obtained for the various carbon columns using the same liposomes. These data (Table 3) show similarities with the C18 packings, since solutes interacted less with the carbon surfaces as the liposome concentration was increased. There were a few exceptions to these general observations, such as phenol on N339 and pyridine and aniline on N110, for which minor maxima were observed. The addition of the liposomes significantly reduced the interactions with the surfaces of the carbon blacks, as demonstrated by the fact that most solutes were eluted from these carbon surfaces. There were, however, some exceptions to this rule, since some nonpolar solutes were retained even at 12% liposome concentration. Comparison of these data to data previously obtained (2) with pure organic solvent mobile phases (n-hexane, dichloromethane, tetrahydrofuran, and methanol) shows that the surface active sites that did interact with basic solutes in pure organic mobile phases were completely deactivated by the liposomal phases. The elution properties of the solutes with liposomal mobile phases were comparable to those obtained when methanol or tetrahydrofuran were used as mobile phases. 
The chromatographic retention of each of the individual components of the liposomes was studied using water as the mobile phase because there appeared to be minor sorption of the liposomes onto the surface of the carbon blacks. Dipalmitoyl phosphatidylethanolamine and cholesterol were found to be retained by the carbon columns, while the other components (dipalmitoyl phosphatidylcholine and albumin) were found to elute at the void volumes of the carbon columns. Therefore, liposomes were prepared using only dipalmitoyl phosphatidylcholine and albumin to investigate the mechanism responsible for the separation (referred to in tables and figures as "liposomes DPPC and albumin"). The results obtained with this liposomal mobile phase for three of the carbon blacks at additional liposome concentrations are contained in Table 4. Figure 3 shows the variation in distribution constants for selected solutes as a function of mobile phase concentration on N765. Comparison of these results to those given in Table 3 shows that the mobile phase which did not contain cholesterol or dipalmitoyl phosphatidylethanolamine did not moderate the solute interactions with the carbon surfaces as significantly. This suggests that there was less coating of the adsorbent by the mobile phase containing only dipalmitoyl phosphatidylcholine and albumin than by the mobile phase which contained all four components. Additional experiments were performed in which an aqueous mobile phase was moderated by the addition of albumin. No effect upon the retention data was observed, suggesting that the albumin was possibly acting merely as a protein associated with the liposome bilayer and did not, therefore, contribute to retention. In a subsequent series of experiments, cholesterol was replaced by cholesteryl oleate, because the cholesterol in alveolar surfactant is probably esterified. 
Cholesteryl oleate is less polar than the free sterol and, therefore, should interact more weakly with carbon surfaces. Liposomes were prepared from cholesteryl oleate, dipalmitoyl phosphatidylcholine, dipalmitoyl phosphatidylethanolamine, and albumin [referred to in tables and figures as "liposomes (ester)"]. The results obtained by the addition of these liposomes to water for two of the carbon adsorbents are listed in Table 5. Liposomes (ester) appear to moderate the aqueous mobile phases less than the other two compositions of liposomes (Tables 3 and 4), which supports the hypothesis of reduced interaction of the cholesterol ester with the carbon black adsorbents. This effect is illustrated by the data presented in Figure 4. Retention data for these three liposomal mobile phases were determined at additional temperatures in order to obtain the values for the heats of adsorption given in Tables 6 through 10. Initially, 22°C and 37°C were used to obtain these values; however, it was found that the heats of adsorption were much higher when ambient temperature and 37°C were used than when 37°C and 47°C were used (Tables 6-9). Dipalmitoyl phosphatidylcholine has a quasi-fluidic phase transition around 33°C, and this transition probably contributes to the values for the heat of adsorption. In general, the heats of adsorption are positive. This suggests that the heats of solution of the solutes in the liposomes are significant contributors to the overall heat of adsorption obtained by liquid chromatography. The magnitude of the heat decreased with increasing liposome concentration. Comparison of the data in Tables 6 through 10 shows that the heats are lower for the liposome (ester) mobile phases. As the carbon surface is changed progressively from N765 (small numbers of active sites) to N110 (greatest number of active sites), there are more extensive interactions with the carbon surfaces by the solutes under investigation (1). 
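Heats of adsorption obtained from retention at two temperatures, as described above, follow from a van't Hoff relation on the capacity factor. A minimal sketch with hypothetical capacity factors (not taken from Tables 6-10); note that the sign convention here gives ΔH < 0 for exothermic adsorption, whereas the paper reports positive heats, so only magnitudes should be compared:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def heat_of_adsorption(k1, T1_K, k2, T2_K):
    """van't Hoff estimate: ln k' = -dH/(R T) + const, so
    dH = -R * ln(k2/k1) / (1/T2 - 1/T1). Inputs here are hypothetical."""
    return -R * math.log(k2 / k1) / (1.0 / T2_K - 1.0 / T1_K)

# e.g. a capacity factor falling from k' = 4.0 at 22 C to k' = 2.5 at 37 C
dH = heat_of_adsorption(4.0, 295.15, 2.5, 310.15)
print(f"apparent heat of adsorption ~ {dH / 1000:.1f} kJ/mol")
```

Because the 22-37°C bracket straddles the ~33°C phase transition of dipalmitoyl phosphatidylcholine, the apparent heat over that interval also absorbs the transition enthalpy, which is consistent with the larger values the paper reports for the lower temperature pair.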
These data demonstrate that there can be significant interaction between the adsorbate and the adsorbent surface for some of the carbon surfaces, even in the presence of mobile phases that can coat the surface of the adsorbent. Therefore, liquid-solid chromatography is the dominant process by which solutes interact. Human alveolar surfactant contains additional constituents such as phosphatidylglycerol, phosphatidylinositol, sphingomyelin, phosphatidylserine, and immunological agents. The selection of phosphatidylethanolamine was predicated upon the fact that there appeared to be good agreement in the literature as to the percentage of this lipid in surfactant, whereas there appears to be some disparity in the literature as to the percentage of phosphatidylglycerol. King reports that canine pulmonary surfactant contains 5% phosphatidylglycerols (11); Pfleger and Thomas (12) and Kikkawa and Smith (13) report 10%. The phosphatidylethanolamine content of canine surfactant ranges from reported values of 7.2 to 3% (11,13). Both phosphatidylglycerol and phosphatidylethanolamine are present in relatively low concentration in lung surfactant compared to the phosphatidylcholine content. Our studies were concerned with phosphatidylcholine, the primary lipid present in alveolar surfactant. Phosphatidylethanolamine was added only to demonstrate that the minor ingredients do indeed play a role in modifying release. Studies were also performed using zwitterionic micellar mobile phases (Zwittergent 3-14 and 3-16) at concentrations exceeding their critical micelle concentrations. It was hoped that these mobile phases would be chromatographically and physicochemically similar to the liposomes but would expedite experimentation since no postcolumn clarification would be required. 
The data obtained for these mobile phases are shown in Table 11, and it may be seen that there was indeed some similarity between these data and the data shown in Table 2. However, when attempts were made to use these mobile phases with the carbon columns, it was impossible to obtain stable detector signals. The carbon columns started to break down and became unstable with these mobile phases. This instability may be due to differences in the critical micelle concentrations of the Zwittergents (3-14, CMC = 0.012%; 3-16, CMC = 0.0012%) as compared to dipalmitoyl phosphatidylcholine (CMC = 1 × 10^-10 M). Therefore, experimental studies with these mobile phases were discontinued. It may be postulated that most of the simple molecules used in this study will be desorbed from the surfaces of carbon blacks by actual alveolar lung surfactant (approximately 99% dipalmitoyl phosphatidylcholine liposomes). The time for release will be dependent upon the strength of interaction of the molecule with the adsorbent surface and the heat of solution of the adsorbate molecule in alveolar lung surfactant. Therefore, it is possible that these molecules may be desorbed to produce either acute, short-term (rapid release) or chronic, long-term (slow release) doses of the agents to the lung tissue and cells.

Conclusions

These studies provide insight into the relative probability for the desorption of particle-adsorbed molecules into alveolar surfactant by correlating desorption with bioavailability. It is unlikely that the particle will act as a sink for polar molecules in solution in the alveolar surfactant once physical release of an adsorbed molecule from the particle surface has occurred. The layer of alveolar surfactant will coat the particle and block the sites on the carbon surface. However, the situation is the opposite for totally nonpolar molecules in solution in the alveolar surfactant, which would tend to be sorbed onto the surfaces of in situ nonpolar particles. 
This event could enhance the residence time of nonpolar molecules in the lung. Phagocytic cells such as alveolar macrophages could provide the means for metabolic release of nonpolar molecules. Postphagocytic events may lead to macrophage lysis and release of any unmetabolized nonpolar molecules onto the lung epithelium, which is the deposition and residence site of the carbonaceous particle prior to clearance. A nonpolar molecule is known to be metabolized within phagocytic cells to a more polar metabolite, thus facilitating detoxification. If a cell containing such hydrophilic metabolites of nonpolar adsorbates is lysed, the cellular contents would not be predisposed to readsorption onto any carbonaceous particle surface. The metabolites would, therefore, be less likely to be reingested by phagocytes. Polar metabolites are likely to remain in solution in the alveolar surfactant, and would, therefore, have increased probability for interaction with lung epithelium. Nonpolar molecules that are metabolized more slowly could be cycling between alveolar surfactant and resident carbonaceous particles and may, therefore, remain in deep lung for longer periods of time. Residence time will be further enhanced if the carbonaceous particle does not elicit a significant inflammatory response leading to an increased influx of phagocytic cells to the lung.
To hydrate or not to hydrate? The effect of hydration on survival, symptoms and quality of dying among terminally ill cancer patients Background Artificial nutrition and hydration do not prolong survival or improve clinical symptoms of terminally ill cancer patients. Nonetheless, little is known about the effect of artificial hydration (AH) alone on patients' survival, symptoms or quality of dying. This study explored the relationship between AH and survival, symptoms and quality of dying among terminally ill cancer patients. Methods A pilot prospective, observational study was conducted in the palliative care units of three tertiary hospitals in Taiwan between October 2016 and December 2017. A total of 100 patients were included and classified into the hydration and non-hydration group using 400 mL of fluid per day as the cut-off point. The quality of dying was measured by the Good Death Scale (GDS). Multivariate analyses using Cox's proportional hazards model were used to assess the survival status of patients, the Wilcoxon rank-sum test for within-group analyses and the Mann-Whitney U test for between-groups analyses to evaluate changes in symptoms between day 0 and 7 in both groups. Logistic regression analysis was used to assess the predictors of a good death. Results There were no differences in survival (p = 0.337) or symptom improvement between the hydration and non-hydration group; however, patients with AH had higher GDS scores. Conclusions AH did not prolong survival or significantly improve dehydration symptoms of terminally ill cancer patients, but it may influence the quality of dying. Communication with patients and their families on the effect of AH may help them be better prepared for the end-of-life experience. Background Previous studies have found that patients who receive palliative care have a better quality of life (QOL) as well as end-of-life experience [1][2][3]. 
In the clinical practice of end-of-life care, terminally ill cancer patients may cease to benefit from oral nutrition and fluids during the very terminal stage [4,5]. However, many family members and even patients themselves request medical staff to continuously administer artificial hydration (AH) [5][6][7]. Therefore, medical professionals often encounter an ethical dilemma related to the provision of artificial nutrition and hydration (ANH) [8,9]. A Taiwanese study found that ANH did not prolong the survival of terminally ill cancer patients [6], and a randomised controlled trial of the influence of AH on terminally ill cancer patients showed no obvious difference in dehydration symptoms, QOL and survival between groups receiving 1 L and 100 ml of fluid daily [10]. In a Japanese study, except for the improvement in membranous dehydration symptoms, hydration provided no benefit, but instead exacerbated fluid overload, induced hypoalbuminemia and failed to correct electrolyte imbalance [11][12][13][14]. Therefore, Japanese clinical guidelines do not suggest that medical professionals administer AH routinely if there is no specific need [15]. Indeed, the patient's condition, fluid overload condition and the attitude of family members are key factors in whether to administer AH [16]. In another Japanese study of over 5000 members of the general population and 800 bereaved family members, 33 to 50% of respondents believed that administering AH to terminally ill patients during the very terminal stage was a part of basic care, with 15 to 31% of respondents believing that AH could relieve symptoms [17]. In a western study, ethnicity played an important role in whether AH was perceived as food or medicine. Ethnic minorities in the United States, such as African Americans, Latinos and Asian Americans (total 66%), were significantly more likely to view AH as food or as both food and medicine than non-Hispanic white subjects (42%) [18]. 
In an Italian study, patients and their families considered AH as useful medical management, with most preferring the intravenous route, as they thought it could improve clinical conditions and had a positive psychological meaning [7]. Thus, cross-cultural comparison of the role of ANH is both practical and culturally sensitive. Previous research shows that AH may be more harmful than beneficial to terminally ill cancer patients' QOL. However, little is known about the influence of AH on patients' quality of dying; therefore, the primary outcome of this pilot prospective observational study was to investigate the influence of AH on patients' quality of dying. The relationships between AH and survival and symptoms were also assessed. It was hypothesised that AH would not affect the quality of dying, improve dehydration symptoms or prolong the survival period. Study design and participants A pilot prospective, observational study was conducted in the palliative care units (PCU) of three tertiary hospitals in different cities in Taiwan (National Taiwan University Hospital, Chi-Mei Medical Centre and Kaohsiung Medical University Hospital) between October 2016 and December 2017. These hospitals were selected as they have abundant palliative care experience, as their PCU have been operational for more than 10 years, and they were willing to participate in the clinical observational study. This study was approved by the Institutional Review Boards of all three hospitals. The inclusion criteria for study subjects were: (1) patients aged 20 years or older, (2) patients with locally advanced or metastatic cancer (histological, cytological or clinical diagnosis), (3) patients who could not have normal oral intake and (4) patients presenting with malaise and at least one of the following dehydration symptoms: delirium, dry mouth or myoclonus. 
The exclusion criteria were: (1) patients who died less than 24 h after admission to the PCU, (2) patients or their family members who declined participation and (3) patients with non-cancer terminal disease. All terminally ill patients in these three PCUs were screened for eligibility during admission. If the patients met the inclusion criteria, the researchers explained the study purpose and protocol to the patients or their families (proxy) if patients had a conscious disturbance. The patients or their proxy provided written informed consent to participate in the study. Outcome measurements On admission to the PCU, the need for AH by the intravenous or subcutaneous route was determined according to clinical evaluation and management. After discussion with patients or their families about AH, the duty physician administered the formulated AH to the patients as required. Patients were classified into the hydration group and the non-hydration group using 400 mL per day as the cut-off point, as the bottle of formulated AH, which contains glucose and electrolytes, is often 400 mL and is routinely administered to terminally ill cancer patients as a basic fluid supply. The daily hydration volume was calculated as the formulated AH together with other fluids for medical purposes, such as antibiotics, albumin or blood transfusion. The two groups were compared to determine the effect of hydration on survival time, symptom relief, Good Death Scale (GDS) and the possible side effects of hydration. 
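The grouping rule described above is straightforward to express in code. In the sketch below, the 400 mL/day cut-off and the fluid components counted toward it come from the text, while the patient records, the field names and the assumption that exactly 400 mL falls in the hydration group are all invented for illustration:

```python
# Classify patients into hydration vs non-hydration groups using the
# study's 400 mL/day cut-off. The daily volume counts formulated AH plus
# other medical fluids (antibiotics, albumin, transfusions).
# Patient records and the >= boundary choice are hypothetical.

CUTOFF_ML = 400

def classify(daily_fluids_mL):
    total = sum(daily_fluids_mL.values())
    group = "hydration" if total >= CUTOFF_ML else "non-hydration"
    return total, group

patients = {
    "P01": {"formulated_AH": 400, "antibiotics": 100},
    "P02": {"antibiotics": 250},
    "P03": {"albumin": 50, "transfusion": 250},
}
for pid, fluids in patients.items():
    total, group = classify(fluids)
    print(f"{pid}: {total} mL/day -> {group}")
```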
Other recorded variables included the patient's age, gender, primary cancer, Charlson Comorbidity Index, social state, religion, clinical symptoms (including the eating condition by mouth, dyspnoea, fatigue, drowsiness, dry mouth, anorexia, muscle spasm, dysphagia, respiratory tract secretion, oedema, ascites, pleural effusion, bowel obstruction, water intake condition and delirium), blood transfusion, antibiotics use or albumin supply and the patient's functional status as measured by the Eastern Cooperative Oncology Group performance status (ECOG). The eating condition by mouth was classified as either reduced but more than a mouthful, or less than a mouthful, each time while eating. Dyspnoea was classified as no or yes, and the dyspnoea level was further divided into exertional only and at rest. The Integrated Palliative care Outcome Scale (IPOS) was used to measure the patient's symptom severity. The ranking was: 0, not at all; 1, slightly; 2, moderately; 3, severely; 4, overwhelmingly; 5, cannot assess. The IPOS was used to assess the fatigue, drowsiness, dry mouth and anorexia symptoms. The myoclonus variable evaluated the patient's worst condition while at rest according to the ranking: 0, none; 1, ≤1 jerk; 2, 2-3 jerks; 3, 4-9 jerks; and 4, ≥10 jerks per 10 s. Dysphagia was classified as no or yes. The respiratory tract secretion variable evaluated the patient's worst condition; the scale was 0, not audible; 1, only audible at the head of the bed; 2, clearly audible at the foot of the bed; and 3, clearly audible at 6 m from the foot of the bed. Lower extremity oedema was measured by observing the leg with less oedema, ranking 0 as none, 1 as mild (< 5 mm), 2 as moderate (5-10 mm) and 3 as severe (> 10 mm). Ascites and pleural effusion were evaluated by clinical examination or imaging, ranking 0 as none, 1 as physically detectable but asymptomatic and 2 as symptomatic. Bowel obstruction was classified as no or yes. 
The delirium level was evaluated using item 9 of the Memorial Delirium Assessment Scale (MDAS), decreased or increased psychomotor activity. The clinical symptoms were evaluated by the main healthcare professionals at baseline during admission to the PCU and 1 week after enrolment until death. Good death scale (GDS) The GDS was used to evaluate the quality of dying [19][20][21] according to five domains scored on a 4-point Likert scale: awareness that one is dying (0, complete ignorance; 3, complete awareness), acceptance of death peacefully (0, complete unacceptance; 3, complete acceptance), honouring of the patient's wishes (0, no reference to the patient's wishes; 1, following the family's wishes alone; 2, following the patient's wishes alone; and 3, following the wishes of the patient and the family), death timing (0, no preparation; 1, the family alone had prepared; 2, the patient alone had prepared; and 3, both the patient and the family had prepared) and the degree of physical comfort 3 days before death (0, a lot of suffering; 1, suffering; 2, a little suffering; and 3, no suffering). The GDS score, ranging from 0 to 15, was discussed by the experienced palliative care team at the team meeting after each patient died. The score of each item was considered separately and the final score was decided by consensus at the team meeting. The higher the total score, the better the good death status the patient had achieved. GDS scores for 68 patients were collected and analysed. A GDS ≥ 12 indicated a better quality of dying according to the quality indicator set at the National Taiwan University Hospital. Statistical analysis Descriptive analyses were used to assess the differences in demographic characteristics between the two groups. The Kaplan-Meier curve was used to estimate the impact of hydration on survival between the two groups, and multivariate analyses using Cox's proportional hazards model were used to assess the survival time of patients. 
The Wilcoxon signed-rank test was applied for within-group analyses and the Mann-Whitney U test for between-group analyses to evaluate changes in symptoms between day 0 and day 7 in the hydration and non-hydration groups. Finally, logistic regression analysis was used to assess the predictors of a GDS ≧12. R was used for the statistical analyses (R Foundation for Statistical Computing, Vienna, Austria), and a p-value < 0.05 indicated statistical significance.

Results
A total of 133 patients were eligible for enrolment in this study, of whom 33 were excluded for the following reasons: 8 patients died within 24 h after admission, 7 declined to participate, 13 had normal oral intake and 5 had a non-cancerous disease. Finally, 100 patients were analysed, 22 in the hydration group and 78 in the non-hydration group. The patient recruitment flow chart is shown in Fig. 1, and the demographic and clinical characteristics of the enrolled patients are provided in Table 1. The average age of participants was 69.19 ± 12.89 years, with the non-hydration group being significantly older (71.26 ± 11.86 years) than the hydration group (61.86 ± 13.97 years) (p = 0.005). The in-hospital mortality rate was significantly higher in the hydration group than in the non-hydration group (p = 0.041). The non-hydration group had a better oral intake condition during admission than the hydration group (p = 0.008), and the groups also differed significantly with regard to religion (p = 0.015). There were no significant differences in hospital, gender, education level, cancer type, ECOG, marital status, bowel obstruction, blood transfusion, antibiotics use or albumin use between the two groups (p > 0.05). The survival analysis (Fig. 2) revealed no significant difference (p = 0.0552) in in-hospital survival time between the non-hydration and hydration groups.
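As a minimal illustration of the between-group comparison named above, the Mann-Whitney U statistic can be computed by counting pairwise wins, with ties counted as one half. This sketch returns only the statistic (not the p-value), and the sample change scores are hypothetical, not data from the study.

```python
def mann_whitney_u(a, b):
    """U statistic for sample a versus sample b:
    count pairs where a_i > b_j, counting ties as 0.5."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u


# hypothetical day-0-to-day-7 symptom change scores for the two groups
change_hydration = [2, 1, 1, 2, 1]
change_non_hydration = [0, 1, 0, 1, 0]
u = mann_whitney_u(change_hydration, change_non_hydration)
```

In practice a statistics package would convert U to a p-value via the exact distribution or a normal approximation; the raw statistic above is the quantity those procedures start from.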
Multivariate analysis using Cox's proportional hazards model of the 68 deceased patients was applied to identify prognostic factors related to mortality, and the results are shown in Table 2. The risk of death was higher in those with unknown religion (HR: 9.844, 95% CI: 1.426-67.948) and in those with fatigue or oedema during admission (HR: 1.722, 95% CI: 1.072-2.767, and HR: 1.469, 95% CI: 1.068-2.019, respectively). Hospital, age, education level, oral intake status, artificial hydration amount, other physical symptoms and functional status during admission were not related to the risk of death. The changes in symptoms between day 0 and day 7 in the two groups are shown in Table 3, with no significant change in fatigue, dry mouth, myoclonus, delirium, dyspnoea or oedema. Regarding drowsiness, both the hydration and the non-hydration groups had more severe symptoms on day 7 than on day 0 (p = 0.008 and 0.038, respectively), with the hydration group having a greater change in drowsiness than the non-hydration group (p = 0.019). Sixty-eight patients died during hospitalisation in the PCUs, and logistic regression was applied to analyse the predictors of a good death, as shown in Table 4.

Discussion
This study investigated the effect of AH on the survival period, symptom relief and quality of dying of terminally ill cancer patients, showing that the administration of AH did not prolong survival or improve dehydration symptoms but was associated with a better quality of dying. Morita et al. found that AH did not affect the presence of delirium in terminally ill cancer patients [11]. In a subsequent study, however, the administration of intravenous AH worsened fluid retention symptoms in terminal lung and gastric cancer patients, while reducing the volume of intravenous hydration improved fluid retention symptoms without any deterioration of dehydration symptoms [12].
In terminal patients with abdominal malignancies, patients given 1 L or more of AH per day had higher symptom scores for oedema, ascites and pleural effusion, although they had lower dehydration scores than those who received less than 1 L of AH [13]. Nakajima et al. also reported that the symptom scores for oedema, ascites and bronchial secretion were higher in patients who received more than 1 L of AH per day [22], whereas Bruera et al. found no difference in dehydration symptoms, such as fatigue, myoclonus, drowsiness and delirium, 4 days later between patients who received 1 L or 100 ml of normal saline per day [10]. Our study showed no significant change in fatigue, delirium, dry mouth or myoclonus after 1 week between the hydration and non-hydration groups. Furthermore, the drowsiness level was more severe in the hydration group. In our study, we used 400 mL as the cut-off point to separate hydration from non-hydration, whereas previous studies used 1 L as the cut-off point, and the groups who received over 1 L of AH per day had lower dehydration scores but more fluid retention symptoms. Therefore, giving less than 1 L, or even less than 400 ml, of AH per day does not affect dry mouth or myoclonus symptoms or exacerbate the severity of oedema or dyspnoea in terminally ill cancer patients after 1 week. In previous studies, many symptoms of terminally ill cancer patients had little relationship to AH [10, 23-25]; thus, routine AH is not recommended for the treatment of terminally ill cancer patients' symptoms. This study also found that the administration of AH to terminally ill cancer patients did not influence survival, similar to previous studies [6,10]. According to Torres-Vigil, African Americans, Latinos and Asian Americans are more likely than non-Hispanic white subjects to view AH as food or as both food and medicine.
Indeed, in a previous study, most terminally ill cancer patients' families regarded AH as basic care and wanted continuous AH administration in the hope that the patient's condition would improve [5-7, 16, 17]. Chiu et al. found that most terminally ill cancer patients in the PCU wish to use ANH and want AH, as do their families [27]. In Taiwan, a culture where food intake is strongly related to healing and hope, AH is regarded as a "lifeline"; thus, withholding or withdrawal of AH is often mistakenly regarded as unethical by those who do not understand the role of AH in terminally ill cancer patients in the stage of actively dying. Many physicians prescribe AH to allay the fears of family members that the patient might be "starved to death." Once again, our study demonstrated that AH does not prolong a patient's life, so instead of focusing on the patient's intake, healthcare professionals should explain to families the role of AH during the terminal stages. Nevertheless, appropriately administering AH to terminally ill cancer patients could achieve a better quality of dying. In the United States, Cohen et al. found that terminally ill patients and their families believed hydration could bring hope, improve patients' symptoms and enhance QOL [28]. Previous studies which measured only the influence of AH on QOL found no such remarkable effect [10]; however, QOL is not equivalent to the quality of dying, which may be influenced by many factors other than those captured by QOL. In our study, appropriate hydration was a predictor of a better GDS (GDS ≧12). Furthermore, as in many other studies, appropriate hydration may meet the psychological needs and expectations of terminally ill cancer patients and their families [5-7, 16, 17] by reducing the burden of making difficult decisions and helping both patients and their families to better prepare to face death.
Nevertheless, more research is warranted to validate the impact of AH on the quality of dying of terminally ill cancer patients. This study was a pilot prospective, multi-centre, observational project, and the recruited subjects were from different hospitals in northern and southern Taiwan. While the study may be representative of the national cancer patient population, there were several limitations. First, the number of study subjects was small, so future studies should involve more patients to confirm the effect of hydration on terminally ill cancer patients. Second, the imbalance between groups showed that fewer terminally ill cancer patients in Taiwan receive AH; hence, there is a risk of sample bias related to the selection of patients referred for palliative care. Third, this study was not blinded; hence, the clinical assessors may have had some preconceived bias. A randomised controlled trial to decrease statistical bias and the placebo effect is warranted in the future. Fourth, we did not record the indication for hydration, whether it was mainly driven by patient/family desire or physician-led; this should be considered in future studies. Fifth, it was not possible to collect detailed data on median survival from hydration to death in each group, as some patients survived and were discharged from the PCUs and hence were not followed up. However, this study only evaluated the effect of hydration on in-hospital survival status, not overall survival. In future, patients could be followed up until death, even if they are discharged. Finally, the two groups of patients were not comparable in terms of age, education and religion; nevertheless, we performed regression analysis to adjust for these differences. This was a pilot study conducted in Asia, and a large-scale, cross-cultural, multicentre study is ongoing based on its results.
Conclusions
For terminally ill cancer patients in the PCU, AH over 400 mL per day might neither prolong survival nor significantly improve dehydration symptoms, but appropriate AH may improve the quality of dying. Hydration remains an ethical dilemma, especially in the Asian context. Communication with patients and their families regarding the benefits and adverse effects of AH is recommended, as this may help better prepare them for the final stage of life and achieve a good death. In the future, a large-scale randomised controlled study of the impact of AH on the quality of dying is warranted.
Functional interaction between plasma phospholipid fatty acids and insulin resistance in leucocyte telomere length maintenance

Background
Previous evidence suggests that plasma phospholipid fatty acids (PPFAs) and HOMA insulin resistance (HOMA-IR) are independently related to leukocyte telomere length (LTL). However, there is limited evidence regarding the effect of their interaction on relative LTL (RLTL). Therefore, here, we aimed to determine the effect of the interaction between PPFAs and HOMA-IR on RLTL.

Methods
We conducted a cross-sectional study involving a total of 1246 subjects aged 25–74 years. PPFAs and RLTL were measured, and HOMA-IR was calculated. The effect of the interaction between PPFAs and HOMA-IR on RLTL was assessed by univariate analysis, adjusting for potential confounders.

Results
In age-adjusted analyses, multivariate linear regression revealed a significant association of the levels of elaidic acid, HOMA-IR, monounsaturated fatty acids (MUFA) and omega-6 (n-6) polyunsaturated fatty acids (PUFA) with RLTL. After adjustment for age, gender, race, smoking, drinking, tea and exercise, elaidic acid and omega-3 (n-3) PUFA were negatively associated with RLTL, and HOMA-IR and n-6 PUFA were positively associated with RLTL. These associations were not significantly altered upon further adjustment for anthropometric and biochemical indicators. Meanwhile, the effect of the interaction of elaidic acid and HOMA-IR on RLTL was significant and remained unchanged even after adjusting for the aforementioned potential confounders. Interestingly, individuals who had the lowest HOMA-IR and the highest elaidic acid levels presented the shortest RLTL.

Conclusions
Our findings indicated that shorter RLTL was associated with lower HOMA-IR and a higher elaidic acid level. These findings might open a new avenue for exploring the potential role of the interaction between elaidic acid and HOMA-IR in maintaining RLTL.
Introduction
Leukocyte telomere length (LTL) is a simple and reliable biomarker of biological age [1], and it is influenced by dietary factors [2]. Recently, a large number of studies have explored the association between dietary factors and LTL, but their results are inconsistent [3]. For example, Tiainen et al. found that total fat and saturated fatty acid intake were inversely associated with LTL in elderly men from the Helsinki Birth Cohort Study [4], whereas no such association was found in Spanish children and adolescents [5]. A systematic review suggests that, since much research is based on population dietary surveys, investigation bias may be one of the dominant reasons for the inconsistency of results [2]. A previous study confirmed that plasma phospholipid fatty acids (PPFAs) can be used as valid markers reflecting long-term intake of dietary fatty acids [6]. However, to our knowledge, only a few studies have explored the relationship between PPFAs and LTL, with results suggesting that trans fatty acid (TFA) levels, particularly palmitelaidic and linolelaidic acid, are likely negatively associated with telomere length [7]. Some case-control studies on individuals clinically diagnosed with insulin resistance show that insulin resistance is associated with chromosomal LTL, and a rise in insulin resistance is the primary reason for the acceleration of telomere shortening [8,9]. One study also suggests that insulin resistance acts as an indicator of healthy aging in humans [10]. The HOMA insulin resistance (HOMA-IR) index can be used to evaluate an individual's insulin resistance [11]. However, the results of previous studies on the HOMA-IR index and LTL are controversial owing to differences in the populations studied and analysis methods used [12,13], and Sampson et al. found telomere length to be unrelated to insulin resistance in type 2 diabetes [14].
These conflicting findings suggest that the link between the HOMA-IR index and LTL may not fit neatly into a simple paradigm. Therefore, the aim of our study was to investigate whether PPFAs and HOMA-IR affect relative LTL (RLTL) in subjects aged 25-74 years, and to further determine whether the interaction between PPFAs and HOMA-IR plays a role in RLTL modification.

Study design and selection of subjects
This study was conducted as a cross-sectional survey from 2008 to 2012 in Qingtongxia County and Pingluo County of the Ningxia Hui Autonomous Region, China. Stratified cluster sampling was carried out to select two villages in each county. A total of 3064 subjects aged 25-74 years were recruited. Each participant underwent a structured in-person questionnaire interview about general demographic characteristics, behavioral lifestyles and current disease status. Pregnant or breastfeeding women and patients with diseases such as coronary heart disease, diabetes mellitus, severe mental illness, infectious diseases, autoimmune diseases and tumors were excluded. Height, weight, waist and hip circumference were measured by trained and qualified investigators. Sitting blood pressure was measured by an electronic sphygmomanometer (Omron-HEM 7301-IT, China), and body mass index (BMI) and waist-to-hip ratio (WHR) were calculated. We employed mechanical sampling and a voluntary principle to select blood samples, and a total of 1458 specimens were collected. We excluded people with the aforementioned diseases based on the questionnaire, as well as those with missing questionnaire, anthropometric or laboratory data; finally, 1246 participants were included in the analysis. This study was approved by the Ningxia Medical University ethics committee; all participants received written and verbal information about our study and gave written informed consent.
Blood collection and laboratory tests
Participants were advised to fast for at least 8 h before blood sample collection. In the morning, specialised physicians drew 5 ml of peripheral venous blood from the participants into a non-anticoagulant tube and 2 ml into an EDTA-anticoagulant tube. The blood samples in the EDTA-anticoagulant tubes were centrifuged and kept at − 80°C for testing. The blood samples in the non-anticoagulant tubes were used for a series of laboratory measurements. Fasting plasma glucose was measured immediately with a One Touch Ultra 2 (LifeScan, USA). Fasting plasma insulin was determined by enhanced chemiluminescence immunoassay (Tegke Xin Biotech Co., Ltd., Beijing, China). HOMA-IR was calculated using the following formula: [fasting plasma insulin (mU/L) × fasting plasma glucose (mmol/L)] / 22.5. Cholesterol and triglycerides were measured by enzymatic assay (CHOD-PAP, Roche Diagnostics GmbH). PPFAs were determined by gas chromatography (Agilent Technologies 6890 N, America).

DNA extraction and RLTL
Genomic DNA was extracted from leukocytes in peripheral blood samples using the D3392-04 DNA blood midi kit (Bao Bioengineering Co., Ltd., Japan). DNA concentration and purity were assessed using a Biospec-nano instrument (Shimadzu, Japan), with OD260/OD280 qualified between 1.6 and 1.9. RLTL was measured using a real-time fluorescence quantitative PCR (Bio-Rad, Germany) method previously described by Cawthon [15]. PCRs were carried out in separate 96-well plates, which were divided into two parts: one for the telomeres (T) and one for the housekeeping gene 36B4 (S), and each plate contained a reaction for the reference gene and a negative control. Cycling conditions for telomere amplification were as follows: 95°C for 10 min to activate the FastStart enzyme (Bao Bioengineering Co., Ltd., Japan), denaturation at 95°C for 15 s, and annealing at 54°C for 2 min, for a total of 22 cycles.
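The HOMA-IR formula above is straightforward to express in code. This is a minimal sketch; the function and parameter names are ours, not from the study.

```python
def homa_ir(fasting_insulin_mu_l: float, fasting_glucose_mmol_l: float) -> float:
    """HOMA-IR = fasting insulin (mU/L) * fasting glucose (mmol/L) / 22.5."""
    if fasting_insulin_mu_l < 0 or fasting_glucose_mmol_l < 0:
        raise ValueError("inputs must be non-negative")
    return fasting_insulin_mu_l * fasting_glucose_mmol_l / 22.5
```

For instance, a fasting insulin of 10 mU/L with a fasting glucose of 5.0 mmol/L gives a HOMA-IR of about 2.22.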
Cycling conditions for the 36B4 gene were the same as those for telomeres, except that annealing was at 58°C for 2 min, for a total of 30 cycles. Finally, the relative T/S ratio, reflecting RLTL, was calculated through the ΔΔCt method, using the following equations: T/S = [2^Ct(telomere) / 2^Ct(36B4)]^−1 = 2^−ΔCt; RLTL = 2^−ΔCt(sample) / 2^−ΔCt(reference gene).

Statistical analysis
Analyses were performed using the SPSS 23.0 statistical package, and a two-tailed P-value less than 0.05 indicated statistical significance. The normality of the distribution of all variables was tested. Continuous variables were expressed as mean ± standard deviation (SD) for normally distributed data and as median with interquartile range (IQR, 25th-75th percentile) for non-normally distributed data. Categorical variables were described as frequencies (percentages). Comparisons between variables in different RLTL groups were performed using ANOVA, or Kruskal-Wallis H tests for non-normally distributed data. Multivariate linear regression analyses for PPFAs, the HOMA-IR index and RLTL were performed to exclude the influence of potential confounding variables. Then, elaidic acid and HOMA-IR were separately stratified into quintiles, and the means and 95% confidence intervals (95% CI) of RLTL were compared using LSD. In addition, to assess the effect of the interaction of elaidic acid and HOMA-IR on RLTL, univariate analysis was employed. Linear trend tests were used to show trend changes of elaidic acid and the HOMA-IR index on RLTL.

Results
The mean age of the 1246 subjects included in this study was 50.0 years (SD 11.8), and 59.4% were female. In a preliminary analysis, the subjects were divided into tertiles according to RLTL (tertile cut-offs: 0.645 and 1.884), and the general characteristics of the study participants are presented in Additional file 1: Table S1.
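The ΔΔCt calculation described in the Methods can be sketched as follows; the Ct values in the assertions are hypothetical and the function names are ours.

```python
def t_s_ratio(ct_telomere: float, ct_36b4: float) -> float:
    """Single-sample T/S ratio via the ddCt method: 2^-(Ct_telomere - Ct_36B4)."""
    return 2.0 ** -(ct_telomere - ct_36b4)


def rltl(ct_tel_sample: float, ct_36b4_sample: float,
         ct_tel_ref: float, ct_36b4_ref: float) -> float:
    """Relative telomere length: sample T/S divided by reference-gene T/S."""
    return t_s_ratio(ct_tel_sample, ct_36b4_sample) / t_s_ratio(ct_tel_ref, ct_36b4_ref)
```

A lower telomere Ct relative to 36B4 means more telomeric template, so a smaller ΔCt yields a larger T/S ratio; normalising to the reference sample puts all plates on a common scale.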
As expected, there were significant differences in RLTL based on age, in which the lowest tertile had a higher age compared with the middle and upper tertiles, suggesting that RLTL declined with increasing age (P < 0.001). Interestingly, WHR also showed a similar trend. In addition, RLTL appeared to differ by tea consumption (P = 0.0036) and systolic blood pressure (SBP) (P = 0.0045), but not by gender, race, smoking, drinking, exercise, BMI or diastolic blood pressure (DBP) (P > 0.05).

Metabolism indicators and PPFAs
In the lowest tertile group of RLTL, fasting plasma insulin, fasting plasma glucose and HOMA-IR were significantly lower (P < 0.001); in contrast, high-density lipoprotein cholesterol (HDL-C) was significantly higher (P < 0.001). However, the values of palmitic acid, stearic acid, elaidic acid, α-linolenic acid, arachidonic acid, saturated fatty acids (SFA), monounsaturated fatty acids (MUFA) and omega-3 (n-3) polyunsaturated fatty acids (PUFA) were higher in the lowest tertile group, and all of them decreased with an increase in RLTL (Additional file 1: Table S1).

Associations between PPFAs, HOMA-IR and RLTL
Next, we applied linear correlation to explore the association of RLTL with PPFAs and metabolic indices. We found that RLTL was inversely associated with palmitic acid, stearic acid, elaidic acid, α-linolenic acid, arachidonic acid, SFA, MUFA, n-3 PUFA and HDL-C, and positively associated with the n-6/n-3 ratio, fasting plasma glucose and HOMA-IR; the other indicators showed no apparent association with RLTL. These associations were not appreciably altered by adjustment for age (Additional file 1: Table S2). To evaluate multivariate correlation, the relevant variables mentioned above were log-transformed to meet the analytical conditions. The age-adjusted analysis revealed a significant association of the levels of elaidic acid, the HOMA-IR index, MUFA and n-6 PUFA with RLTL.
After adjustment for gender, race, smoking, drinking, tea, and exercise, elaidic acid and n-3 PUFA were found to be negatively associated with RLTL, and the HOMA-IR index and n-6 PUFA were positively associated with RLTL. These associations were not appreciably altered by adjustment for other factors that might influence RLTL, including BMI, WHR, SBP, DBP, total cholesterol (TC), triglycerides (TG) and HDL-C (Additional file 1: Table S3).

Effect of the interaction between elaidic acid and the HOMA-IR index on RLTL
We tested for differences in RLTL between quintile groups of elaidic acid and the HOMA-IR index. Notably, the higher the elaidic acid content, the shorter the RLTL (P-ANOVA < 0.001, P-trend < 0.001). In subjects with more than 131.43 ng/mL of elaidic acid, the RLTL was 0.30 lower compared with those in the first quintile (Fig. 1a). On the contrary, among subjects in the lowest HOMA-IR quintile group, the RLTL was 0.41 lower compared with those in the middle HOMA-IR quintile group (P-ANOVA < 0.001, P-trend < 0.001) (Fig. 1b). In addition, there were significant differences between the RLTL values in the lowest quintile group compared with those in the other quintile groups of elaidic acid and HOMA-IR (all P < 0.05). When we evaluated the potential effects of the interaction between elaidic acid and the HOMA-IR index on RLTL, the subjects were divided into tertiles according to elaidic acid and the HOMA-IR index. In the crude model, elaidic acid, the HOMA-IR index, and their combination were related to RLTL (P for interaction = 0.020). Notably, the interaction associations were not greatly altered by adjustment for potential confounders (P < 0.05 for each interaction) (Additional file 1: Table S4).
Moreover, when dividing subjects according to the combination of elaidic acid and the HOMA-IR index into three categories, those who simultaneously showed the highest elaidic acid and the lowest HOMA-IR index presented a significantly lower RLTL than the other groups. Considering the group with the highest elaidic acid and the lowest HOMA-IR index as the reference, statistically significant differences were observed between the reference group and the others (all P < 0.01). In addition, RLTL showed a decreasing trend with an increase in the elaidic acid level at different HOMA-IR index levels (Fig. 2).

Discussion
In this cross-sectional study involving 1246 subjects from a rural Chinese population, we observed that elaidic acid was negatively correlated with RLTL, and the HOMA-IR index was positively correlated with RLTL. In particular, the lowest HOMA-IR index and the highest elaidic acid level were associated with the shortest RLTL in our study.

PPFAs and RLTL
TFAs are closely related to the occurrence of cardiovascular disease [16], and the underlying mechanism mainly involves TFA-mediated damage and apoptosis of vascular endothelial cells through activation of death receptor pathways and mitochondrial pathways [17]. A previous study demonstrated that TFA intake is positively associated with markers of systemic inflammation in women [18]; moreover, epidemiological and clinical studies reveal that a TFA-rich diet markedly increases the serum concentrations of high-sensitivity C-reactive protein, interleukin-6 and tumor necrosis factor [18-20]. LTL ostensibly reflects the cumulative burden of inflammation and oxidative stress over an individual's lifespan [21]. Chan et al. [22] found that the intake of fats and oils is negatively associated with LTL in elderly females, and Mazidi et al. pointed out that TFA levels are likely negatively related to telomere length [7].
The findings of the present study were consistent with these reports and indicated that TFAs are a risk factor for shorter RLTL. At present, the relationship between PUFA and LTL is controversial. The effect of diet on LTL has been found to differ greatly by ethnicity [23], and Kiecolt-Glaser et al. [24] reported no significant effects of n-3 PUFA on telomere length. However, an intervention study indicated that replacing SFA with PUFA could have a very high impact on reducing the risk of cardiovascular disease [25], mainly because n-3 PUFA may lower the incidence of cardiovascular disease by slowing the rate of LTL attrition [26]. There has also been research showing that n-3 PUFA could slow the rate of LTL shortening, while n-6 PUFA could accelerate it [27]; these findings are contrary to the results of the present study. However, a randomised controlled trial indicated that treatment with n-3 PUFA does not lead to an increase in telomere length and that there was a trend toward telomere shortening during the intervention period in a population with mild cognitive impairment [28]. Therefore, further investigation is urgently needed to understand the effects observed here, particularly to clarify whether RLTL is modified through a decrease in n-3 PUFA and/or an increase in n-6 PUFA.

HOMA-IR index and RLTL
The HOMA-IR index is an indicator used to evaluate the level of insulin resistance, which is a pivotal factor affecting the occurrence of diabetes [29] and cardiovascular diseases [30], and a meta-analysis suggests that short LTL may be related to these diseases [31]. However, there is controversy among scholars about the relation between HOMA-IR and LTL. LTL and HOMA-IR were negatively correlated in the female offspring of mothers with gestational diabetes mellitus and in newly diagnosed type 2 diabetic patients [32,33]. Aviv et al. [12] discovered that insulin resistance is inversely associated with LTL in premenopausal but not postmenopausal women.
Barbieri et al. [10] found that age-adjusted LTL was not significantly associated with HOMA-IR. In the present study, however, multiple models revealed that HOMA-IR was positively correlated with RLTL. Interestingly, a systematic review points out that insulin resistance and LTL shortening exist in a vicious circle, such that insulin resistance, as a state of oxidative stress, could lead to faster LTL shortening, while LTL attrition would in turn further trigger or aggravate insulin resistance [34]. It is therefore worth mentioning that the previous hypothesis that insulin resistance affects LTL is limited to individuals diagnosed with insulin resistance or diabetes, whereas our study was aimed at evaluating this hypothesis in a general population, excluding diseases that affect insulin metabolism. Thus, the mechanism through which RLTL increases with an increasing HOMA-IR index needs to be explored further.

Interaction between HOMA-IR and elaidic acid on RLTL
It is well known that the combined effect of genetic background, biochemical and metabolic pathways, and behavioral lifestyles is the primary cause of mammalian aging. Telomeres are also influenced by these factors, acting as an indicator of biological aging [2]. This study showed that elaidic acid and the HOMA-IR index are possible factors influencing LTL. We, therefore, further explored the effect of the interaction between elaidic acid and the HOMA-IR index on RLTL. A similar study indicated that longer telomeres are related to lower white bread consumption and higher dietary total antioxidant capacity in Spanish children and adolescents [5]. Moreover, previous studies clearly show that consuming higher amounts of TFAs could cause or exacerbate insulin resistance in subjects with type 2 diabetes [35]. Therefore, it can be speculated that TFAs and insulin levels exhibit a synergistic effect. However, we found that the effects of the HOMA-IR index and elaidic acid on RLTL may be antagonistic.
Accordingly, these effects need to be researched further. We would like to underline the novelty and limitations of the study. The first novelty of our study is its interaction design: our research provides new insight into the effects of PPFAs and the HOMA-IR index on RLTL in the general population. Another novel feature of our study is that we used biomarkers instead of dietary fatty acid intake to assess the effects of PPFAs and the HOMA-IR index on RLTL, which, to some extent, is more accurate and reliable. Furthermore, the relatively large sample size of our study allowed us to obtain results with sufficient statistical power to identify significant associations of RLTL with PPFAs and the HOMA-IR index. However, we also acknowledge some limitations of this study. First, our data were derived cross-sectionally from a general population and involved a single RLTL measurement, which may not reflect telomere dynamics as well as repeated RLTL measurements, which would likely give more precise information. Second, owing to the large sample size of this study, RLTL rather than absolute LTL was measured. Moreover, RLTL was determined for human peripheral blood leukocytes, reflecting only the mean telomere length of white cells. Finally, changes in telomere dynamics are mainly caused by oxidative stress and inflammatory response; however, owing to the lack of relevant data, we only demonstrated the relationship between PPFAs, the HOMA-IR index and RLTL. Although our current research has shortcomings, we believe that our results still provide helpful information regarding the effects of PPFAs and the HOMA-IR index on RLTL. Therefore, long-term investigations of these effects are required to confirm the present findings and determine whether the results can be extended to other populations.

Conclusion
In summary, we found PPFAs and the HOMA-IR index to be associated with RLTL in the general population after adjusting for potential confounders.
In particular, shorter RLTL was found to be associated with a lower HOMA-IR index and a higher elaidic acid level.

Additional file 1: Table S1. Characteristics of subjects categorised by RLTL. Table S2. Correlation analysis of PPFAs and HOMA-IR with RLTL. Table S3. Multiple linear regression analysis of the association of PPFAs and HOMA-IR with RLTL among subjects. Table S4. Interaction between elaidic acid and HOMA-IR. Table S1 describes the characteristics of subjects by RLTL; the subjects were divided into tertiles according to RLTL (tertile cut-offs: 0.645 and 1.884). Table S2 describes the linear correlation of PPFAs and HOMA-IR with RLTL. All variables were log-transformed; the crude model adjusted for no confounding variables, and the adjusted model adjusted only for age. Table S3 presents the multivariate linear regression results. All variables were log-transformed. Model 1 adjusted only for age; Model 2 adjusted for age, gender, race, smoking, drinking, tea and exercise; and Model 3 adjusted for BMI, WHR, SBP, DBP, TC, TG and HDL-C on the basis of Model 2. Table S4 shows the effect of the interaction between PPFAs and HOMA-IR on RLTL maintenance. Model 1 represents the crude model and Model 2 adjusted only for age. Model 3 adjusted for age, gender, race, smoking, drinking, tea and exercise, and Model 4 adjusted for BMI, WHR, SBP, DBP, TC, TG and HDL-C on the basis of Model 3.
Intracranial hypertension due to spinal cord tumor misdiagnosed as pseudotumor cerebri syndrome: case report Background Isolated onset of intracranial hypertension due to spinal cord tumor is rare and thus easily leads to misdiagnosis and delays in effective treatment. Case presentation Herein, we describe a 45-year-old female patient who manifested isolated symptoms and signs of intracranial hypertension and whose condition was initially diagnosed as idiopathic intracranial hypertension and transverse sinus stenosis. The patient received a stent implantation; however, no improvement was observed. One year later, her symptoms worsened, and during rehospitalization a spinal imaging examination revealed a lumbar tumor. Pathologic evaluation confirmed schwannoma, and tumor resection significantly improved her symptoms, except for her poor vision. Conclusions Space-occupying lesions of the spine should be considered in the differential diagnosis of idiopathic intracranial hypertension, even in the absence of spine-localized signs or symptoms. Background Intracranial hypertension (ICH) secondary to spinal cord tumor is a relatively rare but well-described manifestation. With a concomitant diagnostic ratio of 47% [1], its diagnosis is not particularly difficult when typical spinal symptoms or signs are present. However, the absence of spinal cord signs can lead to misdiagnosis as idiopathic intracranial hypertension (IIH), also known as pseudotumor cerebri syndrome, which is defined as ICH of unknown etiology. Once the condition is misdiagnosed, delayed or unnecessary treatment can result in severe consequences for patients. We describe a patient who manifested isolated symptoms and signs of ICH. Her condition was initially misdiagnosed as IIH and venous sinus stenosis. However, one year later a confirmed diagnosis of lumbar schwannoma was made. We also review the literature describing previous similar cases and share our views on the associated pathophysiology and therapies. 
Case presentation A 45-year-old female patient presented with blurred vision of 13 months' duration and was initially diagnosed with papilledema at the ophthalmology department of another hospital. With no effective treatment, her condition deteriorated over the next 2 months. Consequently, she was admitted to the neurology department of our hospital. No other symptoms of ICH (headache, nausea, vomiting) or any other neurologic deficits were present on admission. The patient, of medium build, had no previous medical history or family history of venous thromboembolism or hematological diseases, and she denied any drug history, including the use of oral contraceptive pills. Physical examination revealed a slower light reflex of the left eye, decreased visual acuity in both eyes (more severe on the left), and left nasal hemianopsia. On neurological examination, no localizing signs were observed. Fundus examination showed bilateral papilledema. Brain magnetic resonance imaging (MRI) showed no occupying lesions, only mildly dilated ventricles (Fig. 1a and b). Lumbar puncture (LP) revealed a significantly elevated pressure of 330 mmH2O. Laboratory examination of cerebrospinal fluid (CSF) indicated a normal cell count of 4 × 10^6/L (normal range 0-8 × 10^6/L) and a significantly increased protein level of 382.6 mg/dL (normal range 0-43 mg/dL). Given the negative CSF India ink stain, the negative blood T-SPOT tuberculosis test, and a normal computed tomography (CT) scan of the lung, the neurologists excluded tuberculous and cryptococcal intracranial infection. Meningeal carcinomatosis was also not considered because of the slow progression, though CSF cytology was not investigated. A diagnosis of cranial venous sinus thrombosis (CVST) was then suspected and contrast-enhanced magnetic resonance venography (CE-MRV) was performed. The examination showed relatively thin transverse sinuses on both sides but no obvious evidence of CVST. 
Considering the possibility that the elevated CSF protein was caused by an extracranial, rather than intracranial, tumor, the neurologists planned a lumbar MRI but failed to perform the examination because of a metal intrauterine device (IUD). Instead, a lumbar CT scan was performed, revealing no abnormalities (Fig. 1c and d). Still, evidence supporting secondary ICH was insufficient. Hence, digital subtraction angiography (DSA) was performed, which showed an obvious stenosis localized in the area of the right transverse sinus; the venous pressure gradient near the stenosis reached 28 mmHg, which confirmed venous sinus stenosis (Fig. 2a and b).
Fig. 1 Brain magnetic resonance imaging (MRI) and lumbar computed tomography (CT) performed during the patient's first admission. a Brain MRI demonstrated mild enlargement of the supratentorial ventricle, b the abnormal sign of vacuolar sella, in the right optic radiation of the lateral thalamus, bilateral medial temporal lobes, and the insular lobes. c, d No abnormalities were found on lumbar CT at first; however, a retrospective review of the spinal CT scan (d) showed evidence of enlarged neural foramina and mild vertebral scalloping, which suggested a longstanding intradural tumor such as schwannoma.
She was then diagnosed with IIH and transverse sinus stenosis (TSS). A few days later, the patient received a percutaneous intracranial right transverse sinus stent implantation. DSA showed that the well-dilated stent was in position, and venography identified smooth venous reflux (Fig. 2c and d). However, her symptoms did not improve. Two days later, a repeat LP revealed that the pressure was still high at 305 mmH2O. Further examinations were suggested but the patient refused and was discharged. One year later, the patient returned to our hospital due to an exacerbated condition. Her vision had worsened and she had been experiencing severe headaches accompanied by frequent nausea and vomiting for over 1 month. 
A brain MRI revealed obvious hydrocephalus, much more serious than that observed 1 year earlier (Fig. 3a and b). She was admitted to our department and was scheduled for shunt surgery. Physical examination showed a much slower light reflex of the left eye and worse visual acuity than before. No neurological localizing signs were found on examination. Ophthalmological examinations revealed severe papilledema and deficits in the bilateral visual fields (Fig. 3c and d). Surprisingly, an LP performed the next day revealed an opening pressure of 320 mmH2O with fast-flowing leakage of CSF at first, which slowed to no leakage within a short time. Repeated adjustment of the puncture needle was not successful, and the Queckenstedt-Stookey test was positive. The CSF test showed a significantly increased protein level of 384.5 mg/dL. By then we highly suspected an intraspinal obstruction or, more precisely, a spinal tumor. Since the patient's metal IUD had by then been removed, a total spinal MRI examination was soon arranged. Imaging indicated an intramedullary tumor at the L1-L3 level and no abnormalities in the cervical or thoracic vertebrae (Fig. 4a and b). One week after admission, the patient underwent resection of the L1-L3 tumor (Fig. 4c and d). The surgery was successful and pathologic evaluation confirmed schwannoma. On the first postoperative day, her headache was prominently relieved. However, her poor vision did not resolve, likely because of optic atrophy due to long-standing ICH. A postoperative head CT scan showed a smaller ventricle than that observed on preoperative imaging. After approximately 2 weeks of recovery, the patient was discharged. A telephone follow-up 1 year later revealed that there had been no further deterioration in her symptoms, though her vision had still not recovered. 
Discussion and conclusions Compared with typical neurological symptoms due to spinal lesions, such as backache, radiculalgia, sphincter dysfunction, and peripheral neurologic deficits, ICH and hydrocephalus secondary to intraspinal tumors are well-known but rare. According to a study published in 2004 [1] reporting 269 cases with these rare complications, hydrocephalus occurred long before the onset of primary symptoms related to the spinal cord in approximately 29% of cases. Notably, the uniqueness of the current case is that the patient was initially misdiagnosed as having IIH, also known as pseudotumor cerebri syndrome. Patients with this disorder can be classified into two subgroups: individuals with unclear causes, also known as IIH, and individuals with identifiable secondary causes. A definite diagnosis of IIH should fulfill the revised diagnostic criteria proposed in 2013 [2]. Even with strict diagnostic procedures, misdiagnosing patients as having IIH is somewhat inevitable if no other neurological deficits appear apart from isolated symptoms or signs of ICH. We reviewed the literature on cases of spinal tumors that were initially misdiagnosed as IIH in the absence of spinal symptoms or signs and found three similar cases (Table 1). Ahmed et al. reported two cases of spinal lymphoma [3]. One involved a 49-year-old man who complained of blurred vision and headache. Similar to our case, he had venous stenosis in the posterior sagittal and right transverse sinus. He received a stent implantation, an optic nerve sheath fenestration, and a ventriculoperitoneal shunt to relieve his symptoms. However, a later biopsy confirmed lymphoma. The symptoms of ICH improved after chemotherapy, but his vision was permanently impaired. In another similar case, the patient died 4 months after chemotherapy. Porter et al. reported a patient with initial presentations of bilateral papilledema and blurred vision who was at first considered to have IIH [4]. 
Not until his spinal cord symptoms developed was the diagnosis confirmed to be spinal astrocytoma. Eventually the patient became permanently paraplegic. Returning to the present case, the significantly increased CSF protein level and the mildly enlarged ventricles failed to meet the diagnostic criteria of IIH and were highly indicative of a secondary pathogenesis. It was unfortunate that MRI examination was contraindicated on her first admission because of the metal IUD. Even so, in retrospect, we should always keep in mind in clinical practice that once a secondary cause of ICH is suspected, further necessary examinations are urgently required. In this case, persuading the patient to remove the IUD earlier or performing the MRI at a lower magnetic field intensity could have benefited her prognosis. The above cases indicate that an isolated initial manifestation of ICH lacking localizing signs in the spinal cord can lead to misdiagnosis and delayed surgical intervention, which ultimately can result in disastrous consequences. Clinicians need a broader differential spectrum of ICH for accurate and timely diagnosis. Interestingly, our patient was misdiagnosed as having IIH accompanied by TSS. TSS is generally a common sign of ICH [5,6]. As early as 1995, King et al. discovered an elevation in cranial venous pressure in patients with IIH, which led to a growing recognition that local venous obstruction plays an important role in the pathogenesis of IIH [7]. Later, other studies reported similar findings [8,9]. Given the strong evidence gathered over the past two decades, venous sinus stenosis, in particular TSS, has long been viewed as a contributor to the pathophysiology of IIH [10]. It is postulated that in patients with IIH, venous sinus stenosis and the subsequent increase in venous pressure initially appear as the downstream consequence of elevated CSF pressure. 
The mild rise in intracranial pressure would in turn produce venous compression and extrinsic stenosis, leading to a persistent elevation in the pressure gradient between the cerebral venous sinus stenosis and the subarachnoid space. This would result in decreased CSF absorption into the superior sagittal sinus and reduced drainage through the arachnoid granulations, thereby further elevating intracranial pressure and perpetuating the positive feedback [11][12][13][14]. Retrospectively, it is more likely that the stenosis over the junction area of the right transverse sinus in our patient was the result of ICH rather than its cause. Endovascular stent implantation across the stenosis is considered safe and effective for treating patients with venous hypertension with a stenosis and pressure gradient [14]; however, in our patient, no response was observed after stent implantation, which also indicated a secondary cause of ICH. It is worth mentioning that the Queckenstedt-Stookey test played a critical role in the final diagnosis. Had the Queckenstedt-Stookey test been performed at her first admission, it might have been possible to diagnose the underlying lumbar tumor earlier and so preserve her deteriorating vision. Hence, the present case also highlights the importance of a thorough clinical examination in clinical practice. As yet, the pathophysiological mechanism underlying the association between spinal tumors and ICH is not well established. Prevailing hypotheses include intraspinal neoplastic compression, intracranial metastasis, increased CSF viscosity, increased CSF fibrinogen, reduced CSF compliance, and neoplastic arachnoiditis. Almost any type of spinal lesion can manifest signs of ICH. The relationship between ICH and cervical tumors can be explained. 
Any obstruction in the upper spinal canal could result in a rise in venous pressure and subsequent ICH, because venous reflux of the upper trunk mainly proceeds through the superior vena cava, which localizes near the third thoracic vertebra. However, interpreting the pathophysiology of lower spinal tumors is not so easy, which led us to question the mechanism in the present case of a lumbar schwannoma. A possible explanation regarding CSF dynamics is that a spinal obstruction could compromise the compliance of the lumbosacral region, also known as an "elastic reservoir" for CSF flow, thereby isolating the spinal subarachnoid space from the intracranial region and impairing normal CSF compensation for pressure alterations [1]. The findings of Morandi et al. in cases of benign spinal cord tumor support this theory [15]. Furthermore, a significant increase in CSF protein is a common sign in these patients. Several researchers have linked the increased CSF protein to increased CSF viscosity, which would in turn increase the resistance to CSF absorption by the arachnoid villi [4,16,17]. However, many studies disagree with this theory, because high CSF protein levels do not occur in all cases and, in fact, have little effect on CSF viscosity [1,18]. Other investigations have reported that increased CSF fibrinogen suppresses the absorption of CSF and causes ICH. The transformation of CSF fibrinogen into fibrin in the subarachnoid space and villi may increase the resistance to CSF outflow. The abnormal presence of fibrinogen may result from chronic inflammation, breakdown of the blood-brain barrier, or subarachnoid hemorrhage [19]. Furthermore, direct secretion from the neoplasm or a meningeal reaction to the neoplasm can both increase CSF protein [4]. In our case, no intracranial lesion was found and the tumor was located inside the spinal cord at the lumbar segment. 
Therefore, the evidence for neoplastic compression or intracranial metastasis causing cranial venous hypertension was inadequate in the present case. Considering the prominently lowered postoperative CSF pressure and protein level, as well as the improved symptoms, the potential mechanism underlying our case may be decreased CSF absorption. We suppose that increased CSF protein promotes elevation of CSF fibrinogen as well as CSF viscosity. These two pathological factors block the pathway of CSF reflux, which travels across the semipermeable membrane of the arachnoid granulations into the superior sagittal sinus, thus reducing CSF absorption and leading to an increase in intracranial pressure. For the treatment of such cases, resection of the primary spinal lesion is the first choice. Many patients can benefit from this surgery, even though some may have refractory hydrocephalus and may need to undergo shunt surgery. Moreover, if shunt surgery is performed first, there is a risk of carcinomatous spread in malignant cases. In conclusion, this rare case highlights that clinicians should always maintain a broad differential when diagnosing idiopathic intracranial hypertension, even in the absence of intracranial lesions or neurologic deficits. Abnormal CSF composition and imaging signs of hydrocephalus in cases of ICH may indicate spinal pathology. Misdiagnosis might result in unnecessary treatments or, more importantly, lead to permanent damage in patients.
Advancing adoption of genetically modified crops as food and feed in Africa: The case of Kenya Genetically modified organisms (GMOs) and genetic engineering (GE) technology have been around since the mid-1990s. Numerous successful applications of genetically modified (GM) crops have been recorded in different parts of the world. The technology has been adopted steadily in several countries, with acreage under GM crops increasing in many cases. Socio-economic studies show that GMO adoption results in improved productivity, reduced cost of labour, and reduced pesticide use. More than 20 years later, and in spite of the foregoing, opposition to GMOs remains almost the same, especially in Kenya. Although the past few months have seen a move toward a favourable enabling environment and political goodwill, a recent report published in The Economist magazine indicated that agricultural productivity in Africa, and Kenya in particular, has remained stagnant for the last 40 years. This points to the vulnerability of Kenya in ensuring food security for its growing population, which has increased at least six-fold since the 1960s. For food security to be achieved, consideration should be given to traditional as well as modern technologies that can greatly increase productivity, in the shortest time possible, while also addressing the devastating effects of pests, diseases, drought, poor soils, and climate change. Genetically engineered crops have been eaten by millions of people and fed to millions of animals and poultry all over the world. For Kenya to move toward sustainable food security, bold, deliberate actions based on sound science and embedded in the uniqueness of Kenyan agricultural systems and culture ought to be taken. This paper reviews the matter of GM foods, their implications for Kenya, and all the underlying factors meriting consideration. What is the GMO technology: Process or product? 
Humans have been improving the quality of domesticated crops for thousands of years, mostly through conventional breeding, in which important traits are encouraged, selected, and passed down from one generation to the next (Keetch et al., 2014; European Safety, 2019). Genetic engineering, however, picks up from here and aims to achieve the modification of crops by selecting novel genes from other crops or organisms and incorporating them into the genome of distantly related species (Weebadde and Maredia, 2011). *Corresponding author. E-mail: olooo.odhiambo@gmail.com or benard.oloo@egerton.ac.ke. This has proven to be faster than the 10 to 15 years of conventional breeding often required to improve a crop for general release. Whereas with conventional breeding some 1,000 to 10,000 genes are transferred between species, genetic engineering moves a single gene or a few well-selected novel genes across species (Baudo et al., 2006). The resulting food crops are referred to as genetically engineered (GE) or genetically modified (GM) foods. To this extent, genetic engineering has been lauded by proponents as a faster, more targeted, more precise, and more efficient way of acquiring intended traits than conventional breeding (The Royal Society, 2020). So far, many crops have been modified by genetic engineering to provide beneficial traits to farmers (GMO Answers, 2019). Most of the crops have been modified for herbicide tolerance, insect pest resistance, and disease resistance, among other farmer-benefiting traits (GMO Answers, 2019). The most widely grown GM crops by acreage have been maize (corn), cotton, soybean, and canola, while other crops such as brinjal and papaya are also grown (GMO Answers, 2019). 
In Kenya, several applications have been made for commercialization of GMO crops (National Biosafety Authority, 2019). Top among them are Bacillus thuringiensis (Bt) maize, B. thuringiensis cotton, Water Efficient Maize for Africa (WEMA) maize, virus-resistant cassava, virus-resistant bananas, and late blight fungal disease-resistant potatoes, among others (National Biosafety Authority, 2019). The WEMA project mainly focused on drought tolerance through conventional breeding. Its successor, TELA, is working toward introducing the Bt gene into WEMA varieties. Yet none of these applications has ever gone past field trials and on to commercialization, except B. thuringiensis cotton, which only recently got the go-ahead for commercialization through cabinet approval in December 2019 (Vijida, 2019). What are the motivators of this technology? Farmers had been losing money for years due to attacks on their crops from pests, diseases, and weeds, while yields were stagnating or diminishing following the successes of the Green Revolution (GMO Answers, 2019). Continuing to improve farm productivity means finding remedies to the pests and diseases devastating the crops and better ways to reduce farm expenses, especially labour, given its increased cost in many parts of the world (Alhassan and Adekunle, 2014). On the other hand, the industry supplying agro-chemicals has been under pressure regarding the toxic nature of insecticides and herbicides, especially given that a majority of insecticide and herbicide residues end up in waterways and pollute the soils, since only a very small percentage is actually absorbed by plants. Furthermore, farmers have realized that some of the target insects and weeds have developed resistance to the insecticides and herbicides. 
Alternatives to both challenges occupied the minds of the industry for a while. When the agro-chemical industry revealed that it could actually transform plants with the B. thuringiensis gene, so that the crop would produce the toxin by itself and farmers could stop heavy reliance on chemical applications, this was received as extremely good news. However, control of weeds was still a major challenge, especially on large-scale farms. Herbicides, especially glyphosate, were used at large scale, but they could not be applied over the crops because they would kill them too, since the active ingredient is broad-spectrum and systemic in plants. A herbicide-tolerant and insect-resistant B. thuringiensis maize was the novel answer the industry introduced for farmers who were yearning for a way to reduce not just the cost of pesticides but, even more, the labour and mechanical costs of controlling weeds. The glyphosate-tolerant and insect-resistant B. thuringiensis crops were well received by farmers and eased the need for weed and insect control by mechanical, manual, and other means. In retrospect, farmers may have been over-motivated by this prospect, planting many B. thuringiensis and herbicide-tolerant crops while ignoring the other Integrated Pest Management (IPM) practices that would have helped prevent or delay the development of resistance for much longer. Despite this challenge, farmers in many countries around the world, including Canada, the USA, Japan, Argentina, India, and the Philippines, have enjoyed the advantages brought about by GMOs for more than two decades (ISAAA, 2019). 
Enumerated benefits of adoption of GMO crops in different countries A study sponsored or conducted by the European Commission, tracing the benefits of GM crops over the 19 years from 1996 to 2014, suggested a drastic reduction in pesticide use of 581 million kg, thus reducing the associated environmental footprint by 20% (Brookes and Barfoot, 2017). In the USA alone, planting GMOs reduced pesticide use by 46.4 million pounds in 2003. B. thuringiensis cotton in China reduced the use of formulated pesticides by 78,000 tonnes in 2001, equivalent to a quarter of all the pesticides sprayed in China in the mid-1990s (European Commission, 2010). With reduced pesticide use comes reduced exposure and potential poisoning of farmers and farm workers. Insecticide used to control cotton bollworms fell from as much as 5,748 metric tons of active ingredients in 2001 to as low as 222 metric tons in 2011, a 96% reduction (Perry et al., 2016). The adoption of GMO technology equally contributed to the continued expansion of no-tillage agriculture in the U.S., saving 1 billion tons of soil through herbicide-tolerant crops (Perry et al., 2016). B. thuringiensis cotton in the US and Australia has been documented to improve the number and diversity of beneficial insects in cotton-growing fields (Qaim and Klumper, 2014). Even with all these enumerated benefits of GMOs, Kenyan farmers were not sure how much longer they would have to wait until they could grow these crops in their fields. Thankfully, the cabinet approval of B. thuringiensis cotton's commercialization was well received, and seen by many as a positive step toward ensuring the much-needed progress. Indeed, the country has made good progress since then with the flagging off of the planting of GMO cotton in selected farms by the Cabinet Secretary for Agriculture. 
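As a quick arithmetic check, the bollworm insecticide figures cited from Perry et al. (2016) can be verified to work out to the stated 96% reduction (the variable names below are ours, for illustration only):

```python
# Cotton bollworm insecticide use, metric tons of active ingredient,
# as cited in the text from Perry et al. (2016).
before_t, after_t = 5748, 222
reduction_pct = (before_t - after_t) / before_t * 100
print(f"{reduction_pct:.1f}% reduction")  # about 96.1%, matching the cited 96%
```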
The pressing question remains: how did a country with such great enthusiasm about the promises of biotechnology turn into one of such skepticism, given all the research available for consideration? Kenya was one of the first countries in the world, not just in Africa, to ratify the Cartagena Protocol on Biosafety to the Convention on Biological Diversity in the Route to Food (Mungai, 2019). The National Biosafety Authority website and documented regulations show that there is enough preparation to proceed with ensuring GMO adoption. In spite of the efforts of investors, scientists, and GMO enthusiasts, the lack of a favourable environment has led some of these pro-GMO crusaders to develop cold feet about their efforts in the country. As a matter of fact, Kenya is losing the opportunity to attract additional investment for the continuous development of the technology, as most organizations shift their support to countries with a favourable political climate (National Biosafety Authority, 2019). MATERIALS AND METHODS The results presented are the culmination of over 3 months of research involving in-depth discussions with various stakeholders including farmers, seed industry representatives, and academia; biotechnology industry visits; lectures by prominent industry players in GMO technology; and participation in international short courses and training offered by the World Technology Access Program (WorldTAP) during the senior author's stay at Michigan State University through the Norman E. Borlaug Fellowship Programme. RESULTS From the results of this study, it is clear that several reasons stand in the way of adoption of GMOs in Africa in general and Kenya in particular. The issues range from felt or unfounded fears regarding the effects of GMOs, to the mixed signals from the EU about the health and safety of GM foods, to the potential risk of GMOs to the environment and biodiversity. 
Other reasons include the fear of possible effects on non-target organisms and the potential for insect pests to develop resistance to the GM crops. Lastly, food safety fears around GMOs remain pertinent in some parts of the continent. These cases are presented in detail in the following. Unearthing the fears of GMO adoption Kenya drafted Regulations on Biosafety as early as 1998 and was poised to be one of the few countries to take advantage of the new technology when it was first released to this part of the world. However, the moratorium placed on GMOs in 2012 by the Ministry of Health dealt a big blow to the continued development, promotion, and adoption of GMO crops in Kenya (Ministry of Sanitation and Public Health (MoSPH), 2012). The ban has remained in place seven years later, with the latest direction from government being that GMO activity in the country will be handled on a case-by-case basis. This is what delivered the cabinet approval for the commercialization of B. thuringiensis cotton in December 2019 (Vijida, 2019). There is evidence of many promising projects and opportunities to improve African crops, especially from public research institutions, notably the Kenya Agricultural and Livestock Research Organization (KALRO), the International Livestock Research Institute (ILRI), and the International Centre of Insect Physiology and Ecology (ICIPE). However, lack of funds and expertise has been noted as a bottleneck to unveiling the technology, owing to the highly regulated nature of biosafety (Wambugu and Kamanga, 2014). It is also vital to note that the appropriateness of specific technologies depends on current agricultural systems, practices, and the surrounding natural environment, especially with regard to environmental safety (Wambugu and Kamanga, 2014). 
This fact tends to be ignored; instead, opponents of biotechnology prefer to dismiss the technology wholesale without considering the socio-economic benefits and the utility of the technology as an option for safeguarding environmental resources. The mixed signal: EU, WHO and UN? To follow the science or the politics? European Food Safety Authority (EFSA) studies have repeatedly demonstrated that GMO foods are as safe, both for the environment and for humans, as their conventional counterparts (European Commission, 2010). Yet the European Union (EU) still has restrictions on the growing of GMO crops in Europe. This stance has bewildered many observers (Tagliabue, 2017). The European Commission grants authorizations to place GM food and feed on the European market for a period of ten years, yet it has requested EFSA to publish as many as five new guidelines in the past 5 years alone, and then for some reason has repeatedly ignored the EFSA opinions demonstrating that GMOs are just as safe as their conventional counterparts. Whereas the EU may be justified in calling for these improvements, to the casual observer the EU may simply be doing this to avoid the backlash from the technology developers, or simply laying down more roadblocks through over-regulation. This leaves developing countries, especially those with limited capacity, at a point of confusion, as it seems the European Commission's decisions on GMO crops are the result of politics rather than science. But this is where developing countries ought to break from the mold and begin to chart their own course, because they must realize that the priorities for Europe and Africa are different. This realization would help developing countries handle the mixed signals coming not just from the EU but also from the WHO and FAO-UN. The FAO-UN is, on one hand, calling on developing nations to address malnutrition based on modern biotechnology (FAO, 2013). 
While on the other hand, it is warning of the dangers of GMOs to the environment (FAO, 2013). GMO for Africa: What are the drivers and opposition? Unintended and adventitious harmful effects of GMOs on the environment are one reason for the fiercest opposition raised by opponents of GM crops (Wambugu and Kamanga, 2014). Yet more than 100 independent U.S., European, and international scientific societies have addressed the relative safety of GMOs and their conventional counterparts and arrived at the conclusion that properly regulated GMOs pose no new risk to the environment and human health as compared to conventional counterparts (The National Academies of Sciences Engineering Medicine, 2016). Studies have also revealed that farming insect-resistant B. thuringiensis corn in the Philippines has not demonstrated reduced numbers or diversity of insects (Pringle, 2013). A 10-year study commissioned by the USDA in 2006 demonstrated that there is no increased risk of invasiveness or persistence in wild habitats for the GM crops (oilseed rape, potatoes, corn, and sugar beet) and traits (herbicide tolerance, insect protection) studied (Fernandez-Cornejo and Caswell, 2014). The same conclusions were arrived at on the basis of a study by the European Commission (European Commission, 2010). These studies do not exclude all possibility of crops forming persistent weedy relatives, only that productive GM crops are unlikely to survive outside cultivation conditions. This is the reason the studies have always focused on case-by-case evaluation and recommend post-release monitoring and evaluation for 10 years or more after release (European Commission, 2010). From a development point of view, however, it is critical to place the opposition to GMOs in the context of opposition to other technologies that experienced similar, if not worse, opposition. Normally, there are many reasons why societies oppose and block new technology besides the inherent nature of the technology itself.
Most of the initial opposition has to do with the creative disruption that new technologies embody across a number of different fields. That is the very reason why society must not discourage the unconventional voices of our time. They may be the best people we need to leapfrog the current set of challenges and make a quantum leap (Juma, 2016). As Einstein once said, "Problems cannot be solved at the same level of thinking that created them in the first place." Innovations and inventions are how we circumvent this closed thinking by employing a different approach to solving our current problems. The apparent opposition to new technologies, including GMOs, may need to be understood in this context. This may be the reason Wambugu and Kamanga (2014) conclude that without serious investment, the support of a critical mass at regulation, astuteness in government political affairs in gaining goodwill, excellent issue management of GMO lobby groups, and well-resourced outreach, GMOs are likely to fail. This list is a true reflection of the matters that are not part of the GMO science yet must be tackled to address the challenges and drive adoption of the technology. How about direct effects on non-intended/targeted organisms? The early warning of the possible impact of B. thuringiensis crops on Monarch butterfly larvae caused panic, and many people began to wonder whether there was any possibility that GMO crops were actually causing the death of the USA's most loved butterfly (Holt-Giménez, 2019). In 2001, collaborative research by scientists from Canada and the U.S. observed that the possibility was negligible (Sears et al., 2001b). A report by the U.S. Environmental Protection Agency (EPA) stated that, according to the data presented, B. thuringiensis did not present any unreasonable adverse effect on unintended wildlife in the environment (Sears et al., 2001a). Despite this, opponents are still raising the same questions (Sears et al., 2001a).
How about the development of insect-pest resistance? The management of insect resistance is a concern for scientists and governments, including regulatory authorities (Purdue University, 2019). Recommended biosafety practice is to ensure provision of an associated refuge of non-GMO crops so that insects can grow without selection pressure from insect-resistant varieties (Difonzo, 2019). Post-release monitoring and evaluation of the GM crop and its surrounding environment also acts as a tool to control any resistance. Post-release monitoring requires a well-trained and coordinated effort and an information-sharing forum across the country. Recent GMO developments now use multiple genes conferring different types of traits (stacked genes), and these can help discourage the selection-pressure burden that would lead to the development of resistance. For this to work, county governments in Kenya must be empowered to report any early cases of potential "exhaustion" of resistance, so that appropriate action can be taken before it gets out of hand (Difonzo, 2019). Best agronomic practices and integrated pest management (IPM) strategies are vital for resistance management (Difonzo, 2019). Safety of GMOs One critical requirement of food and any new product is that it must not just satisfy hunger. It must be safe, nutritious, and accepted by consumers as a legitimate source of food, for which consumers make independent choices and not out of coercion. GMO foods have not escaped this aspect one bit. The main opposition that has been witnessed as far as GMO foods are concerned has centered around three major areas: food safety, environmental safety, and socio-cultural aspects (The National Academies of Sciences Engineering Medicine, 2016). Food safety is the most critical of these factors, speaking purely from a science-based perspective.
In many cases there are established ways that food scientists and safety experts use to test the safety of products, including chemical, biological, and physical testing. Such approaches are best usable where a single ingredient is at stake. They are not very effective for whole foods, and hence scientists have resorted to other means to arrive at a determination of the safety of GMO plants and feeds. The concept that was embraced and ratified by the Cartagena Protocol on Biosafety is the concept of substantial equivalence. The aim is not to determine the absolute safety of a GMO but to compare its main food nutrition and safety-related attributes to those of its conventional counterpart. For the more than ten most commercially grown GMO foods, the scientific consensus reported so far indicates that there are no significant harmful effects on health, of either food or feed, attributable to the consumption of GMOs (The National Academies of Sciences Engineering Medicine, 2016). Decision-making parameters for accelerating GMO adoption in Kenya From the results it is evident that fear presents one of the most prominent reasons for the negative view of GM crops in the world, and especially in Kenya. The fear may be real or imagined. To address these issues, a proper understanding of the potential risks and benefits of GMOs and the nature of the forgone opportunity cost is vital. Furthermore, the potential risks must be stated clearly, as must the role of politics in enhancing or hindering steps in the GM adoption process. It is only by doing this that countries can make informed choices as unfounded fear is cleared. Any real fears will then be evaluated through informed decisions based on risk assessment and characterization. The discussion dissects the issues and offers information that can be considered with respect to decision-making on GMOs in Kenya and Africa.
The benefits of biotechnology One of the factors that does not get much attention in the GMO debate is the attendant benefits that many countries have enjoyed due to the introduction of biotechnology. A study assessing the global economic and environmental impacts of biotech crops for the first twenty-one years of adoption showed that biotechnology has reduced pesticide spraying by 671.2 million kg and has reduced the environmental footprint associated with pesticide use by 18.4% (Perry et al., 2016). The technology has also significantly reduced the release of greenhouse gas emissions from agriculture, equivalent to removing 16.75 million cars from the road (Brookes and Barfoot, 2017). At the same time, a meta-analysis of the impact of biotechnology (Qaim and Klumper, 2014) reported that GM technology has reduced pesticide use by about 37%. In the USA alone between 1998 and 2011, non-adopters of herbicide-resistant corn reduced their herbicide use by 1.2%, while adopters of insect-tolerant crops reduced insecticide use by 11.2% (Perry et al., 2016). Other studies detailing the impact of GMOs in China reported that the use of B. thuringiensis cotton resulted in a reduction in pesticide use of 78,000 tons of formulated pesticides in 2001. This value accounted for about 25% of all the pesticides sprayed in China in the mid-1990s (Tao and Shudong, 2006). In yet another important study by the USDA covering data collected from 1999 to 2012, it was shown that B. thuringiensis cotton adoption has caused a significant reduction in pesticide use in India (Fernandez-Cornejo and Caswell, 2014). There are many other benefits that go unmentioned as opponents lure the public toward the most controversial and sometimes immeasurable issues, which appeal to feelings and emotions rather than facts.
Opportunity cost of delayed use or adoption of biotechnology Studies have attempted to address the matter of the forgone benefits of delayed adoption of important food crop improvements through GE technology in Africa. One such paper reported work done by Wesseler et al. (2017), in which they examined the opportunity cost of the delay in adoption of biotechnology in several countries in Africa. Under their estimation, their model projects the cost of such a delayed course of action in implementing GMO technology to be very substantial. For example, they estimated that a one-year delay in approval of the pod-borer-resistant cowpea in Nigeria would cost the country about USD 33 million to 46 million and hypothetically result in the loss of between 100 and 3,000 lives. Given that Kenya too had an opportunity to adopt GMO crops after South Africa, Wesseler et al. (2017) estimated the forgone benefit of that delay to the Kenyan economy as well. According to a report by Insect Resistant Maize for Africa (IRMA), it was very possible that Kenya would have adopted GMO technology soon after South Africa; this did not happen, and hence up to 4,000 lives could theoretically have been saved. However, this must be looked at in the context of complacency in government, and where all other factors, like improved production systems, irrigation, and use of improved seeds, among others, are kept constant (Wesseler et al., 2017). Risk assessment and capacity for adoption of GMOs in Kenya The Kenya National Biosafety Authority was established by the Biosafety Act No. 2 of 2009 to exercise general supervision and control over the transfer, handling, and use of genetically modified organisms (GMOs) (National Biosafety Authority, 2019). Because of the nature and the complex matrix of food, the purpose of safety tests is to evaluate whether the GM crop is just as safe as its widely consumed relative, both to humans and to the environment.
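The linear intuition behind such opportunity-cost estimates can be sketched as follows. This is an illustrative simplification only: the function and all figures below are hypothetical placeholders, not the actual Wesseler et al. (2017) model, which accounts for many more factors (adoption dynamics, discounting, production systems).

```python
# Illustrative sketch: forgone benefit of delaying a crop approval,
# approximated linearly. All numbers are hypothetical placeholders.

def forgone_benefit(annual_net_benefit_usd, adoption_rate, delay_years):
    """Benefit lost = annual net benefit x share of farmers adopting x years delayed."""
    return annual_net_benefit_usd * adoption_rate * delay_years

# e.g., a crop worth a hypothetical $40M/year in net benefits,
# 80% eventual adoption, and a 3-year approval delay
cost = forgone_benefit(40_000_000, 0.8, 3)
print(f"Estimated forgone benefit: ${cost:,.0f}")  # → Estimated forgone benefit: $96,000,000
```

Even this crude back-of-the-envelope form shows why multi-year delays translate into substantial forgone value at national scale.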
The safety assessment criteria are based on internationally developed and agreed guidelines and best practices by the UN-FAO, WHO, Codex Alimentarius, and other respected global organizations. According to the procedures published by the National Biosafety Authority, once the committee has received an application for evaluation of a request to commercialize a GMO crop (dossier), it must publish it to the public within 14 days. This is where the competence of systems-based, functioning regulatory authorities comes into play. The Biosafety Authority does not only need to understand all the requirements of the dossier, but must also be able to determine, based on sound science, whether the application has captured sufficient data and material to allow an unbiased assessment of the application. The National Biosafety Authority of Kenya can request additional data, should it need it, to help with a determination about a product. Furthermore, it has access to expert resources, not just from the pool of scientists in Kenya but even from the African Union and other Biosafety Networks in Africa, who can help with specific expertise necessary for the evaluation (Wangari, 2019). The NBA will then request written comments from the public, scientists, and other interested parties to be submitted within 21 days of publishing the dossier. After this, the evaluation is done, and the NBA stipulates that it will communicate back to the applicant within 90 days of the application and not more than 150 days after receipt of the complete application (National Biosafety Authority, 2019). There are important fundamentals of science that allow such a system to work and provide checks and balances along the process of commercialization of GMOs used as food and feed. The first is the concept of good laboratory practice (GLP), which is followed to ensure reproducibility of test results.
Second is the principle of direct data entry, which stipulates that any resulting data must be entered as and when an observation is made. Lastly, scientific results go through a rigorous, double-blinded peer review, which often removes bias and allows work to be examined on the basis of its own scientific merit and not by subjective means, such as consideration of who the authors are. Whereas the regulatory demands of GMOs have created a scenario where it is currently very expensive to deregulate (commercialize) the crops, a case for a joint risk assessment body to serve the continent and allow countries to share the burden of the regulatory process for GMOs, such as the establishment of an East Africa Food Safety Authority, seems a plausible idea. Africa may benefit greatly from the creation of an African Biosafety Authority that resembles the EFSA and can at least reduce the burden on less developed countries in affording the commercialization of GMOs. By so doing, experiences can be quickly shared and expertise quickly deployed among countries as desired. Harnessing the political will Many people from several quarters all over the world, and especially the scientific community, have relegated the debate and the final blockade to utilization of GMO foods in developing countries to the presence or absence of "political will". In their book chapter "Does Africa Need Political Will to Overcome Impediments to GM Crops", Alhassan and Adekunle (2014) reported that countries like Brazil, Argentina, India, and the Philippines developed the political will to engage. However, they make this assertion without defining what political will is, how these countries developed it, and how other countries can develop it as well. The matter of relegating the challenge of biotechnology adoption in Africa, and Kenya especially, to political will needs careful attention.
The reason is that, whereas at this stage political will is needed to help adopt GM crops, leaving such a decision to political will is dangerous where politics takes over, runs amok, and announces support for the adoption of a harmful or irrelevant technology. Anchoring anything such as GMO development on politics is one sure way to embed the technology in quicksand. The point ought to be the pursuit of political will to promote sound scientific evidence and strong and effective biosafety institutions. Since these are embedded in law, ensuring that the laws of the nation are respected and upheld irrespective of the political office bearer is a much better guarantee for the kind of investments undertaken to commercialize the technology than can be offered by any political will. It is also very important that political and policy makers look at the alternative scenarios. Failing to adopt GMOs, or even to review the ban, would mean a continued status quo. In an honest evaluation we must realize that we are not just avoiding risk; since nothing is risk-free, even continuing the ban exposes us to risk of some kind, especially that of very limited markets for imports in case of drought. The aim must be risk balancing. The overarching question may need to be: where would we be if we continue what we are doing now? And whatever the scenario happens to be, the country must then ask: can it afford that position? The answers countries get from these questions should provide the best impetus for driving adoption or continuation of the moratorium on GMO and GE crops. One of the areas that has been suggested is harnessing the technology to address very pertinent issues close to the country in question. For example, the development of Water Efficient Maize for Africa, Insect Resistant Maize for Africa, and other local crops being modified to provide more beneficial effects that can directly address consumers' needs is most welcome (CYMMIT, 2018).
This initiative has the advantage that it is being supported through philanthropic means and national parastatals, which are public institutions. The sponsors of this project include the Bill and Melinda Gates Foundation, the Howard G. Buffett Foundation, and the U.S. Agency for International Development (CYMMIT, 2018). On one hand, one cannot fail to realize that government officials of developing countries are overwhelmed by the weight of all the information either for or against the technology. This can be very disturbing, especially for those in leadership. It seems that the scientific community has not done a good job of convincing the decision makers at the political level about the safety and attendant benefits of GMO crops for the people and the economies of these nations. At the same time, government officials ought to carry out independent research and practice critical thinking, which goes beyond what is said to why it is said, beyond who is offering the report to why the report is offered. At the least, the relevant GMO technology should be selected for use in Kenya, given that some of the foremost biotech crops being fronted were grown in some countries decades ago, and newer and superior GMO offerings are currently available in the developed countries. The question is whether these first-generation GMOs are the appropriate ones for Kenya, or whether by adopting them we will be repeating the mistakes that led to their being abandoned or upgraded for farmers in the developed countries. In closing, going through the "Regulations and Guidelines for Biosafety in Biotechnology for Kenya", it is difficult to see where any issues of lack of safety should be raised if the process detailed in the documents were followed. It goes to prove that there is a tendency to bash GMO crops without necessarily having had a chance to read through the regulations in place.
But there is also a possibility that the level of trust that citizens have in their governments and attendant institutions has a direct bearing on their trust in GMO technology, irrespective of the facts as outlined in science. Conclusions Decades of growing GM crops have allowed millions of farmers around the world not only to increase productivity, but also to control some serious insect pests, diseases, and weeds in the fields, resulting in reduced use of pesticides, increased productivity, and hence profitability. Kenya has made slow but steady progress with GMO crop commercialization, with B. thuringiensis cotton being approved for field trials and later released to farmers in 2020. There are other beneficial GM crops (Bt maize and bacterial-wilt-resistant cassava) in the pipeline for the NBA regulatory process and, hopefully, release into the environment in the future. In encouraging adoption of GMO crops, capacity building and taking into account the uniqueness of the agro-ecological conditions and farming systems of the country are vital. Kenya's NBA and relevant authorities must ensure farmers' ability to afford seeds, and responsive regulation of GMOs, both of which are critical factors to bear in mind. Perhaps the lifting of the existing ban, based on evaluation of the available scientific evidence, will provide a conducive environment for full exploitation of biotechnology and encourage rigorous research to deal with any unfolding safety situation. With the bold step the government has taken in commercializing B. thuringiensis cotton technology in 2020, there can be no reason why the progressive adoption of the technology in Kenya cannot be realized. This will also allow the country to be better prepared to take advantage of available food grains for importation in the unfortunate incidence of drought resulting in hunger and starvation. CONFLICT OF INTERESTS The authors declare that they have no conflict of interest whatsoever.
Controller placement problem during SDN deployment in the ISP/Telco networks: A survey With the successful implementation of Software-Defined Networking (SDN) in data center networking, the way forward for its deployment in the ISP/Telco network is becoming prominent. Small and medium-sized networks may easily adopt SDN. Research on SDN deployment and implementation for large-scale networks is continuing. This paper presents the current research status of the Controller Placement Problem (CPP) and Multi-CPP (MCPP) over SDN with their specific challenges and provides a comprehensive review of the major performance metrics, that is, latency, and controller load balancing techniques. This survey highlights the use of network-partitioning-based CPP and clustering approaches and their benefits in the context of SDN deployment. Moreover, this paper highlights the importance of implementing SDN, and SDN security issues, in ISP/Telco networks. Finally, we provide some key areas of ongoing research and discuss future research directions regarding the various SDN-based Controller Placement (CP) issues in next-generation IP and advanced networking technologies. INTRODUCTION Software-Defined Networking (SDN) is an extensively used network architecture. It isolates the control plane from the data plane to manage a whole network, enabling network scalability and programmability. The control plane and data plane are interconnected in traditional network architecture.1,2 Li et al.3 indicate that the Controller Placement Problem (CPP) is the process of determining the best places to put controllers and the best switches to pair with those controllers in order to accomplish an objective. Researchers and network engineers have focused a lot of attention on one of the most challenging problems in SDN, the positioning of the controller.
To achieve better performance, identifying the correct number of controllers and their best locations inside the network is the primary challenge in SDN. Over the years, a number of CPP techniques have been proposed, developed, and used to find the best solution; each has distinct goals, advantages, and drawbacks.4 Figure 1: An overview of the paper's structure. […] latency and load balancing of SDN controllers. After that, in Sections 5 and 6, we go into SDN in ISP/Telco networks and SDN security considerations in ISP/Telco networks, respectively. In Section 7, we highlight discussions and future research directions, and finally conclude the paper in Section 8. The overview of this survey paper is structured as shown in Figure 1. SOFTWARE-DEFINED NETWORKING SDN is a network management approach that makes use of programmable network configurations to enhance performance, lower the cost of network provisioning, and simplify network monitoring. Hence, SDN technology is a type of network management technique that enables a high level of programmability and centralized management. According to Seyedkolaei et al. in Reference 15, SDN's core idea is to create programmable networks through the division of the control plane and data plane in order to enhance network operation. The data plane, often referred to as the forwarding plane, is in charge of directing traffic in accordance with the decisions made by the controllers. The control plane has the responsibility of managing traffic on the network and establishing guidelines and regulations for the data plane device(s). It provides the data required for routing. SDN virtualizes network services, reducing hardware needs and enabling software and programs to manage data centers. SDN offers the programmability of network elements and the creation of new routing and forwarding technologies without the need for hardware upgrades.16 Lobo et al.
17 added that, currently, the SDN paradigm is being adopted and used for a variety of computing models, including cloud, fog, and edge computing, as well as a variety of networking types, including corporate, mobile, and wireless. The SDN approach advances networking through innovation, simplification, and development. The SDN paradigm has shifted network intelligence from network devices to a central controller. Controllers are distributed over a network to improve reliability, balance load, and eliminate single points of failure. Some benefits of SDN may include the following: network management and visibility; reduced hardware footprint and opex; simplified policy changes; and networking innovations. There are still some challenges with SDN, nevertheless. The SDN concept is being used and adopted by emerging architectures such as Network Function Virtualization, the Internet of Things, data centers, vehicular ad hoc networks, and cloud computing, as well as more traditional networking sectors like sensor networks, optical networks, mobile networks, and wireless networks.18 Both scientific research and business are becoming more interested in SDN. With multiple companies, for example, Google, Facebook, Yahoo, and Microsoft, pushing and implementing SDN through open standards development, SDN has now had a significant impact on the industry. SDN is currently being used in a variety of network types, including enterprise and government networks, datacenter networks, wide-area networks (like Google B4), mobile networks, Internet exchange points, and so forth.19 The SDN paradigm is used in data centers, cellular providers, service providers, businesses, and residences. The majority of vendors are accepting SDN in order to develop products and services that will satisfy the demands of leading network owners and operators and offer novel concepts to the market. For example, Google, Verizon, NTT Communications, Deutsche Telekom, Microsoft, and Yahoo!
claim that SDN will enable them to create networks that are simpler, easier to design and program, and easier to manage.20 Researchers used partners to deploy SDN technology over a three-year period at their campus and at a number of other campus sites across the country. SDN deployment and implementation for a large-scale network, however, remain unresolved research questions. Multiple network vendors have released OpenFlow-capable switches as products and described their SDN plans. Also, many of the largest networking equipment manufacturers, including Ericsson, Cisco, VMware, Huawei, NetApp, Alcatel-Lucent, HP, Cumulus Networks, Brocade, and Juniper Networks, have already launched SDN projects. When choosing an SDN solution provider, companies must consider the following features: cloud compatibility, integration readiness, data and analytics, programmability, and support. A few SDN solutions in 2022 from the top companies are discussed in Table 1 (https://www.spiceworks.com/tech/networking/articles/best-sdn-solutions/). Table 1: SDN solutions and company overviews. Cisco Meraki: For managing networks, Cisco Meraki provides software-defined solutions that address WAN, Wi-Fi, mobile device management, IoT, and other related domains. Cisco+ NaaS: To implement, administer, and monitor an SDN environment, Cisco+ contains all the tools, technologies, and service support required. Cradlepoint NetCloud: 4G, 5G, and WAN-based wireless networking solutions are Cradlepoint's areas of expertise. Using cloud-based, software-defined technologies, Cradlepoint NetCloud assists in developing and integrating wireless network systems. Dell SDN Solution: Since its launch in 2019, Dell SDN combines VMware's VeloCloud software with Dell's IT infrastructure to create an all-in-one solution. IBM SDN Services: Along with having its own data centers and supplying NaaS, IBM focuses on SDN strategy, assessment, and consultancy.
An overview of SDN architecture Cloud compatibility, integration readiness, data and analytics, programmability, support, and so forth are important must-have SDN aspects. Network programmability, logically centralized intelligence and control, abstraction of the network, and openness are key areas in which SDN technology can benefit a business. The reference architecture for generic SDN is shown in Figure 2.21 It is made up of three layers: a control layer in the middle, an infrastructure layer at the bottom, and an application layer on top. Infrastructure layer: It is composed of forwarding components like switches, routers, and wireless access points, among others. These devices are responsible for collecting and sending packets in accordance with the controller's instructions. Each switch keeps one or more flow tables, each of which has entries that consist of the entry identification, action, and statistics fields. When a packet arrives, its header is compared to the entries in the flow table. If there is a match, the entry's stated action, such as forwarding the packet to a certain port or discarding it, is executed. If there is no match, the packet is sent to the controller for processing. Control layer: For managing the infrastructure components, it consists of one or more controllers. These controllers are responsible for keeping the entire picture of the network, using information collected by the forwarding devices, and also for forwarding rules to switches. The primary functions of the controller are mostly related to topology and traffic flow. Link discovery, topology management, decision-making, storage management, and flow management are the major components found inside the controller core. The controllers use a southbound interface, such as OpenFlow, for communicating with the forwarding devices. The east- and westbound interfaces of the control layer are utilized for communication when there are multiple controllers, as shown in Figure 2.
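The flow-table match/miss behaviour described above can be sketched as a few lines of Python. This is a conceptual illustration only: the field names, actions, and table layout are simplified stand-ins, not the OpenFlow wire format or any real switch pipeline.

```python
# Conceptual sketch of the data-plane match/action loop: a packet header is
# matched against flow-table entries in order; on a table miss it is handed
# to the controller. Fields and actions are illustrative, not OpenFlow itself.

FLOW_TABLE = [
    # (match fields, action) -- first matching entry wins
    ({"dst_ip": "10.0.0.2"}, ("forward", "port2")),
    ({"dst_ip": "10.0.0.3"}, ("drop", None)),
]

def handle_packet(packet_header):
    """Return the action for a packet: the first matching entry's action,
    or hand the packet to the controller on a table miss."""
    for match, action in FLOW_TABLE:
        if all(packet_header.get(k) == v for k, v in match.items()):
            return action
    return ("send_to_controller", None)

print(handle_packet({"dst_ip": "10.0.0.2"}))  # → ('forward', 'port2')
print(handle_packet({"dst_ip": "10.0.0.9"}))  # → ('send_to_controller', None)
```

The table-miss path is exactly what makes controller placement matter: every miss costs a round trip to the controller, so switch-to-controller latency directly affects flow setup time.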
The optimal selection of controllers is based on the controller features. The most popular open-source SDN controllers in industry and academia include the Open Network Operating System (ONOS), OpenDayLight (ODL), OpenKilda, Ryu, and Faucet.22 The modularity of each controller is governed by the design focus and programming languages. The controller's portability and performance are affected by the programming language choice. For example, Python-based controllers such as Ryu provide a well-defined API for developers to change the way components are managed and configured. Shah et al.23 insist that Java is the best option, because it allows multithreading and is cross-platform. Python, however, has performance problems with multithreading. There are issues with memory management in C and C++. The .NET languages are runtime platform-dependent (Linux compatibility is not supported). Thus, the authors in Reference 23 demonstrate that among different controllers (NOX, POX, Maestro, Floodlight, Ryu), the Java-based Beacon has the greatest performance. Some OpenFlow (OF) controllers are summarized in Table 2. The appropriate selection of an SDN controller is based on performance factors such as throughput, latency, and CPU usage. Others may use a few common features of SDN controllers, including programming language, OF support, network programmability, northbound REST Application Programming Interface (API), Southbound Interface (SBI), efficiency (performance, reliability, scalability, and security), GUI, clustering, quantum API, productivity, partnership support, platform support, and modularity.
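One simple way to combine the performance factors named above (throughput, latency, CPU usage) into a single ranking is a weighted score. The sketch below is purely illustrative: the candidate names are real controllers, but the numbers and weights are made-up placeholders, not benchmark results.

```python
# Hypothetical weighted scoring of candidate controllers on throughput,
# latency, and CPU usage. All figures and weights are illustrative only.

CANDIDATES = {
    #         (throughput kflows/s, latency ms, cpu %)
    "Ryu":    (20,  9.0, 70),
    "ONOS":   (500, 2.0, 55),
    "ODL":    (400, 3.0, 60),
}

def score(throughput, latency, cpu, weights=(0.5, 0.3, 0.2)):
    """Higher throughput is better; lower latency and CPU usage are better.
    Latency is scaled (x100) so its penalty is comparable to throughput."""
    wt, wl, wc = weights
    return wt * throughput - wl * latency * 100 - wc * cpu

best = max(CANDIDATES, key=lambda name: score(*CANDIDATES[name]))
print(best)  # → ONOS
```

In practice the weights would come from the deployment's priorities (e.g., a latency-sensitive ISP edge would weight latency much more heavily), and the inputs from actual measurements such as those reported in Reference 23.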
24 Application layer: SDN applications use the controller's API to configure flows to route packets along the best possible route, load-balance traffic, handle link failures, and divert traffic for auditing, authentication, inspection, and similar security-related tasks. Thus, access control, firewalls, load balancing, network virtualization, and security monitoring are a few of the SDN applications. Because they are event-driven, SDN applications' responses might be as simple as a default action or as complicated as a dynamic reaction.

Open interfaces: OpenFlow, ForCES, OpFlex, PCEP, and other southbound interfaces specify the communication protocol between data plane nodes and control plane components. Although each has a different design objective, the command signals between the control plane and data plane are implemented using the SDN southbound protocols. OF is the most commonly used Southbound Interface (SBI) and is a de facto industry standard. The fundamental responsibility of OF is to define flows and classify network traffic based on predefined rules. Northbound interfaces such as REST, OSGi, FML, and Procera provide an interface for application developers to create applications.

CONTROLLER PLACEMENT PROBLEM

The number and placement of controllers in the network affect a wide range of factors, such as performance metrics, availability, fault tolerance, and convergence time. Because of this, identifying the number and placement of controllers, known as the CPP, is one of the issues that needs more attention. 2 SDN is capable of resolving current network issues and enhancing network performance. Although there are other issues and opportunities for research in SDN, the CPP is considered to be the main issue that could have a significant effect on network performance.
25 CPP primarily relies on figuring out how many controllers are needed overall and where they should be placed to improve network performance. All SDN operations are completed through regular message exchanges between controllers and switches. Therefore, message exchanges are significantly impacted by the location of the controller(s), which in turn affects SDN performance.

Research on CPP was first conducted by Heller et al. 26 They defined the issue as a facility placement problem and demonstrated that it is NP-hard. Since then, many initiatives have been made to position the controllers in the best possible way. By calculating the optimal location for the deployed controllers and the minimal number of controllers to be placed, the authors provided a principal objective for solving the CPP: reducing the propagation delay (latency). Kumari et al. 27 focus primarily on CPP solution approaches. Their paper offers a formal definition of CPP and an in-depth review of the different performance indicators and characteristics of the available CPP solutions.

In terms of CPP, Wang et al. 28 outlined a novel strategy to reduce propagation latency between Switch-Controller (SC), offered many lines of future study, highlighted some difficulties in the field, and opened relevant CPP questions. The need for an efficient algorithm, multi-objective optimization, cooperation among controllers, and cost awareness are a few of the issues described. Chen et al. 29 added that the deployment model and performance optimization are the two key areas of interest in CPP research. Numerous network performance measures, including propagation delay, controller capacity, node failure, deployment cost, and energy savings, have been taken into account from the perspective of performance optimization. Abdel Rahman et al. 30 looked at the best placement in the scenario where switches and controllers are linked together via Wi-Fi networks. Hollinghurst et al.
31 presented a rigorous comparison of four widely utilized approaches to the controller placement problem. Ul Huque et al. 32 suggested a simple approach to satisfy the necessary SC latency in large-scale networks while dynamically minimizing the number of provisioned controllers. Kesentini et al. 33 used a method based on the Nash bargaining game to minimize SC delays and Controller-Controller (CC) delays while balancing controller load, all at once. Kobo et al. 17 thoroughly examined two issues: controller placement and controller mastership reelection. The researchers discussed and used the k-means model for local controller placement and the reoptimized k-means or k-center model for global controller placement.

In order to maximize network reliability, controller load balance, and latency between controllers and switches, Zhang et al. 34 formulate the Multiobjective Optimization Controller Placement (MOCP) problem. They then determine the best locations for controller placement, the nodes that each controller can control within those locations, and the best way to distribute routing requests among these controllers. The authors propose a mathematical model of this problem as the optimization objective function. The Adaptive Bacterial Foraging Optimization (ABFO) technique is designed to resolve this model, in response to the computational complexity arising from the actual network state.

In general, CPP can be broadly divided into two classes: the capacitated controller placement problem (CCPP) and the uncapacitated controller placement problem (UCPP).

Capacitated CPP (CCPP)

When placing controllers in CCPP, load and capacity are taken into account. If load and capacity are not considered, there is a possibility that a controller will fail: some controllers could be excessively loaded, while others might be lightly loaded.

Uncapacitated CPP (UCPP)

Each SDN controller in UCPP has an unlimited capacity, and the load placed on the controllers is neglected.
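The difference between the two classes can be illustrated with a small assignment sketch. Given a precomputed switch-to-controller distance matrix, UCPP lets every switch join its nearest controller, while a simple greedy capacity-aware rule (a sketch of the CCPP idea, not any specific published algorithm) caps each controller's load:

```python
# Illustrative contrast between UCPP and CCPP switch assignment.
# dist[s][c] is a made-up latency from switch s to controller c.

def assign_uncapacitated(dist):
    """UCPP: every switch simply joins its nearest controller."""
    return [min(range(len(row)), key=row.__getitem__) for row in dist]

def assign_capacitated(dist, capacity):
    """CCPP sketch: each switch joins the nearest controller that still
    has room, so no controller is loaded beyond its capacity."""
    load = [0] * len(capacity)
    assignment = []
    for row in dist:
        # controllers ordered by distance; pick the closest non-full one
        for c in sorted(range(len(row)), key=row.__getitem__):
            if load[c] < capacity[c]:
                load[c] += 1
                assignment.append(c)
                break
    return assignment

# 4 switches, 2 controllers; controller 0 is closest to every switch
dist = [[1, 5], [2, 6], [1, 4], [3, 3]]
print(assign_uncapacitated(dist))        # [0, 0, 0, 0] - controller 0 overloaded
print(assign_capacitated(dist, [2, 2]))  # [0, 0, 1, 1] - capacity forces a split
```

The uncapacitated rule piles every switch onto the nearest controller, which is exactly the overload scenario CCPP is meant to prevent.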
Getting the best location and satisfying results when solving CPP is challenging and involves meticulous planning and wise decision-making. In order to appropriately arrange many controllers, it is important to take into account the various factors that affect SDN performance. These CPP objectives and challenging factors are presented in Table 3. Das et al. 38 describe the CPP in SDN and emphasize its importance. In addition to the CPP's main use cases in data center networks and wide area networks, they also examine its most recent uses in a number of new fields, such as mobile/cellular networks, 5G, named data networks, wireless mesh networks, and VANETs. Regarding CPP issues, however, large ISP/Telco networks have not received enough attention.

General formulation of CPP

The primary network components for an SDN-enabled network are switches, controllers, and the links that join them. Therefore, the network is often modeled as an undirected graph G = (S, E, C), where S represents the set of switches, E is the set of physical links among those switches or controllers, and C is the set of controllers to be placed in the network. The switches and the controllers are represented as follows: S = {s1, s2, s3, ..., sn}, where n denotes the number of switches in the network, and C = {c1, c2, c3, ..., ck}, where k denotes the number of controllers located in the network. Here, P = {p1, p2, p3, ..., pk} is one possible placement of the k controllers. The relationship between switches and controllers can be defined as C ⊂ S, since each controller is placed at the position of one of the switches. The term d(s, c) is the shortest path between s ∈ S and c ∈ C.
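Under these definitions, d(s, c) is simply a shortest-path computation on the weighted graph. A minimal self-contained sketch, using a made-up four-switch topology with a controller co-located at s1:

```python
# The network as an undirected weighted graph, with d(s, c) computed by
# Dijkstra's algorithm. Topology and link weights are invented for
# illustration only.
import heapq

def dijkstra(graph, source):
    """Shortest-path distance from `source` to every reachable node.
    `graph` maps node -> list of (neighbour, link_weight)."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# S = {s1..s4}; the controller is placed at s1 (C ⊂ S)
graph = {
    "s1": [("s2", 1), ("s3", 4)],
    "s2": [("s1", 1), ("s3", 2), ("s4", 7)],
    "s3": [("s1", 4), ("s2", 2), ("s4", 3)],
    "s4": [("s2", 7), ("s3", 3)],
}
d = dijkstra(graph, "s1")  # d(s, c) for every switch s, with c = s1
print(d["s4"])             # 6  (via s1 -> s2 -> s3 -> s4)
```

Placement algorithms then evaluate candidate controller positions by aggregating these d(s, c) values, for example into the average or worst-case latency metrics discussed later in this survey.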
Comparative analysis of different SDN controller placement strategies

Currently, a number of solutions have been developed and proposed to address SDN-based CPP. Existing solutions have been categorized in terms of controller allocation and the optimal number of controllers, by minimizing network latency (propagation, controller processing), deployment cost, and energy consumption, as well as maximizing reliability and resilience. 4 The findings illustrate the existence of multiple CPP strategies, including cluster-based, linear and quadratic programming (LP/QP)-based, evolutionary-based (bio-inspired, genetic algorithms), heuristic and greedy algorithm-based, and simulated annealing-based approaches, while machine learning-based techniques are currently the subject of extensive research. Each of these strategies has a specific objective, a specific application, and certain weaknesses. A few different controller placement strategies for cluster-based large-scale networks are shown in Table 4 as examples.

Multiple controller placement problem

A single centralized controller cannot keep up with expanding network and application demand. The most likely outcome is a single-point bottleneck and failure. Since the controller is solely responsible for all control activities, any failure would have a substantial impact on network performance. As a result, the multi-controller approach is a feasible alternative for SDN in large-scale networks, and it matters for investigating SDN control plane scalability. 48 Since the location and number of controller deployments have a large impact on network performance in a multi-controller network architecture, the MCPP has become an area of interest in current SDN research. Hu et al.
49 concluded that it is ineffective to deploy just one controller to manage a huge network. Placing multiple controllers, however, is still challenging. In order to enhance the network's scalability and latency, it is most effective for big networks to use multiple controllers to manage the control plane traffic. There are two types of multi-controller architecture: flat architecture and hierarchical architecture.

Multiple controllers are deployed in a large-scale SDN environment in order to minimize latency, the workload of the controller, or multiple network performance indicators in relation to controller placement. Consequently, an SDN network can be managed by multiple Network Operating System (NOS) instances. 26 In a small network, such as the network in a data center, one controller is adequate to manage the switches. In order to increase the scalability and stability of WANs, multi-controller deployment is becoming popular. 50 Dhar et al. 51 pay emphasis to the CPP issues with SDN and find that multiple controllers must be deployed in order to keep a large-sized SDN scalable and reliable.

Holfed et al. 19 address new developments in architectural standards, protocols, and application designs to enable scalability in SDN. Selvi et al. 52 indicate that using a single controller causes the network's reliability and performance to decrease as the network's size grows. In order to manage and improve network availability, multiple controllers are used. The CPP for this setup is known as the MCPP. The choice of metrics and the network topology itself determine where to locate controllers. Wang et al.
53 suggested a multi-controller architecture. In this work, the researchers used a distributed strategy to evenly transfer the load among multiple controllers. The proposed model employs a distributed method and multiple controllers to address the reliability issue. When a controller fails unexpectedly, the system can be restored by diverting all communications to other controllers that are still functioning normally.

Yuqi Fan et al. 54 present the Reliability-aware Controller Placement (RCP) algorithm, which divides the network into subnetworks and inserts a controller into each of them. Many controllers are efficiently positioned, and the connections between switches and controllers are defined using a local search technique. The simulation results show that the latency of both the primary and backup systems may be effectively reduced using the provided RCP technique.

A common trend in SDN is the addition of multiple controllers, which addresses reliability problems, horizontal expansion, and performance bottlenecks. However, as SDN networks have advanced, the position and quantity of controller deployments have been discovered to have a significant impact on network performance in a multi-controller network design. Therefore, how to create an appropriate placement of the distributed control layer in the SDN network design remains a challenge to be solved.

Scalability concerns in SDN

SDN with multiple controllers refers to an SDN-enabled network that provides high performance, scalability, and availability, among other benefits. However, SDN suffers from a variety of performance and scalability issues, demanding major research efforts to maximize control plane scalability. Efficient resource allocation is in great demand and is required for a scalable and adaptable network.
55 There are many different ways of determining scalability, including administrative, functional, geographical, load, and generational. A lot of work is now being done to improve the control plane's scalability in SDN. One of the most common control plane scaling approaches is topology-based scalability, which investigates the relationship between network topology (e.g., number of controllers, locations, etc.) and scalability concerns. 56 There are two types of topology-based controllers: centralized controllers and distributed controllers. Furthermore, the distributed type has flat, hierarchical, and hybrid topologies. 36 The distributed and hierarchical SDN controllers can partition the network into smaller domains and assign some control functions to local or regional controllers. This can minimize the load and latency of the central controller while also improving the scalability and performance of the SDN network.

Domain partitioning concentrates on dividing the entire network into multiple SDN domains, whereas controller placement concentrates on locating the best places to increase scalability. The scalability of multiple controllers is increased by using multiple controller deployments to divide a network into different domains. 25 Greater outcomes in terms of accessibility and scalability will be achieved by using multiple physically distributed but logically centralized controllers across different network domains. However, this could lead to issues with load imbalances across several controllers. As a result, the Effective Initial Mapping in SDN (EIMSDN) method has been proposed to balance the load and reduce latency for managing the flow in the software-based network.
57 ISPs and data centers must constantly grow their operations to meet rising demand. ISP networks often feature fewer switches and routers than data center networks; nonetheless, nodes in such networks are typically geographically distributed. The huge diameter of these networks contributes to controller scaling issues, flow setup and state convergence latencies, and consistency requirements. Scaling while ensuring reliability and performance can be difficult. 58 ISPs struggle to continuously provide a high-quality service with the rise in connected devices and data-intensive applications. The ability of a system to accommodate an increase in usage, traffic, and data without significantly altering the infrastructure is referred to as scalable ISP/Telco infrastructure. The following challenges are frequently encountered when creating and maintaining scalable ISP/Telco infrastructure: cloud dependencies, traffic peaks, data management, security, latency, cost, and maintenance. ISPs can gain many benefits by implementing an efficient telecommunications traffic management system, including improved network performance, a better user experience, reduced latency, enhanced security, and better resource allocation.

Machine learning for the CPP in SDN

Depending on the demands of the network administrator, the SDN paradigm can be used for a variety of tasks, including traffic engineering, network virtualization, and load balancing. 59 It is easier to employ ML techniques because of the benefits of SDN, such as logically centralized control, a global perspective of the network, software-driven traffic analysis, and continuous updating of forwarding rules. ML has been used in a variety of applications in the specialized area of SDN, including traffic engineering, resource management, intrusion detection systems, and other security purposes. Amin et al. 60 used ML methods based on supervised learning, unsupervised learning, and reinforcement learning to optimize routing in SDN.
The training, evaluation, and testing phases are the three main stages of machine learning. The training step is carried out using the training set. A supervised learning model must be trained using a labeled dataset, whereas an unsupervised model does not require one. Semi-supervised learning is based on a dataset in which some of the data has labels and some has none.

In order to address CPP issues, Ramya et al. 61 suggest a traffic engineering method that makes use of machine learning to anticipate controller numbers by analyzing and predicting controller traffic. The K-Means++ algorithm is used to determine the best places for controllers to be placed. The proposed method is simulated using Mininet, and it performs better than the existing methodologies. Ramya et al. 62 draw the conclusion that, in the future, a machine learning-based strategy may be employed for classifying and predicting the controllers' traffic for the same placement methodology, from which dynamic controller allocation can be realized.

From the perspectives of traffic classification, routing optimization, QoS/QoE prediction, resource management, and security, Xie et al. 63 explore how machine learning techniques are utilized in the context of SDN. By separating the control and data planes, software-defined networking (SDN) revolutionized network architecture and made networks simpler. Machine learning (ML) and its derivatives, however, have improved the intelligence of these systems. ML and SDN have recently been the focus of many research studies. Faize et al.
64 raise a few significant issues and potential directions for future ML-for-SDN research: reliable training datasets, a distributed multi-controller platform, enhanced network security, cross-layer network optimization, and gradually deployed SDN. A wider range of areas, including edge computing, optical networks, the Internet of Things, vehicular networks, mobile networks, and wireless sensors, can benefit from the usage of ML for CPP in SDN.

The existing controller placement strategies in a multi-controller SDN environment overlook a number of factors, including the path reliability of S2C and C2C connections and the adoption of an efficient ML algorithm. 65 Researchers presented the Constrained Multi-Objective Heuristic Placement Approach (CMOHPA), based on the criteria of greatest S2C path reliability, limited S2C hop count, maximum C2C reliability, load balancing between the controllers, and a limited number of controllers, to find the most suitable positions for controllers.

Network partitioning-based CPP and clustering approach

The features of WANs are long distance, high connection costs, and massive scale. WANs connect several local area networks or data centers across geographically separated sites. Generally speaking, the objective of CPP in a data center is to obtain an optimal solution; in WANs, however, a heuristic method or network partition should be used to quickly narrow the search space and obtain a suboptimal, workable solution. 37 Commonly, CPP is modeled as a network partitioning issue. To ensure that the network as a whole achieves the specified goals, it is important to partition the network into subnetworks and implement the objectives separately in each of them. Although adaptive CPP splits the entire network at regular intervals, clustering methods may be useful for deciding where to deploy the initial controllers. The clustering approach separates the network into domains, which makes it easier to figure out how many controllers are needed.
66 CPP solution strategy: the network-partitioning-based CPP and clustering approaches have potential benefits in the context of large WAN ISP/Telco networks. Recent findings state that both the load on the controller and the S2C latency influence the controller's response time. The performance of SDN is influenced by reliability, load balancing, and the response time between switch and controller. Partitioning becomes necessary if the controller is overloaded and there is significant propagation latency between the switches. Network partitioning can reduce the overall complexity of a large-sized network. One school of thought regards CPP as a clustering problem in which a big network is divided into multiple small network domains, each of which is under the control of a single controller. 67

Zhang et al. 68 consider a distributed SDN architecture that makes use of a cluster of controllers in order to increase network scalability and reliability. They assess the delay trade-offs of the CPP for a few real ISP network topologies and suggest an innovative adaptive technique that takes S2C and C2C latencies into consideration in order to find the relevant Pareto frontier.

According to Fan et al. in Reference 44, the initial single-controller placement concept fails to satisfy the demands of the actual network when SDN is gradually expanded to large networks such as wide area networks. In order to apply SDN in large networks, many controllers must be used to create a distributed control layer.

Yazici et al. 69 suggest a cluster-based distributed paradigm in which a master controller is chosen based on the network load such that, in the event of an increase in demand, the master node can be moved to a less burdened one.
To choose suitable sites for the controllers, the network must be properly divided into multiple clusters. However, network partitioning results in a trade-off between a number of parameters, including load balancing, reliability, and latency. 5 As a result, splitting the network and locating the controller in each cluster is another complex challenge. 70 Large networks are usually separated into different domains to provide scalability, privacy, and security. 72,73 In order to reduce overall latency, Syed-Yusof et al. 74 suggest a multi-criteria clustering strategy that places the controllers based on predetermined constraint metrics between the controllers and the switches. The findings demonstrated that the suggested technique enhanced node distribution over the state-of-the-art options for the dense networks anticipated in the 5G scenario. Without adding excessive latency, their method may considerably increase the scalability performance of SDN control networks.

A network clustering-based Particle Swarm Optimization (PSO) strategy for the placement of controllers in SDN is recommended by Wang et al. 53,71 Gao et al. 75 developed a PSO approach, as an extended article, to reduce the network's overall average latency. The authors divide the network into k clusters, and each cluster is managed and controlled by a controller. The load on the controllers, which is a crucial aspect for large-sized networks, is taken into account. Considerations include load balancing, switch-to-controller latency, inter-controller latency, and controller load. In comparison to the k-center and capacitated k-center strategies, the results demonstrate improved performance. While considering load balancing, the goal is to achieve maximum utilization of each controller. Liao et al.
42 introduce DBCP, a density-based switch clustering technique, to partition the network into multiple smaller sub-networks. The size of each sub-network in DBCP can be determined by the deployed controller's capacity. The optimal number of controllers is also determined as a result of density-based clustering. Their experimental results show that DBCP gives greater performance than state-of-the-art approaches in terms of time consumption, propagation latency, and fault tolerance.

In Reference 29, Chen et al. provide a strategy called community detection controller deployment (CDCD). There are two components to the controller deployment: the first is network partitioning, often known as community detection, and the second is controller position selection. In order to divide the large-scale SDN network into many sub-networks with community properties, the Louvain heuristic algorithm (LHA) is used.

Qi et al. 76 suggested a network partition method based on the modified density peaks clustering (MDPC) algorithm, which clusters the switches to generate multiple sub-networks out of the overall network, in order to reduce the average propagation latency between switches and controllers. The controller would remain at the original switch location during the division process but would not exist independently of the switches. To locate the controller in the center of the sub-network and lower average latency, the researchers employed average degree and closeness centrality. Their controller placement method effectively cut down on latency. In particular, the average latency can be decreased by 10% when compared to optimized K-means and 35% when compared to K-means. Chen et al.
77 proposed an improved Density-based Controller Placement Algorithm (DCPA) that can partition the entire network into several sub-networks after exploring possible values of the radius to find the necessary number of controllers. The controllers are placed in each sub-network to reduce both the average and the worst-case propagation latency between the controllers and switches. Testing the algorithm's performance on 100 actual network topologies from the Internet Topology Zoo, their results showed that DCPA can always identify a controller placement strategy with a small time overhead to lower propagation latency at various network scales, with a margin of error of less than 10% from the optimal situation.

CPP and M-CPP algorithms: K-means/K-Means++

One approach for resolving the network partitioning-based CPP problem is the clustering methodology. K-Means clustering, an unsupervised learning approach, is used to handle clustering issues in machine learning and data science. The primary drawback of K-means is that it initially assumes centroids at random and attempts to form clusters. We employ the K-Means++ technique to prevent this. 61 K-Means++ guarantees a more intelligent initialization of the network's centroids and raises the quality of clustering. The clustering centers in this algorithm are chosen from the nodes that are far apart from one another. The rest of the method, except initialization, is identical to the typical K-means approach for locating controllers in a network's best places. K-Means++ thus combines the basic K-Means algorithm with a smarter centroid initialization. 78

Initialization algorithm:
1. Randomly select the first centroid (clustering center) from the existing data points (X).
2. For each data point, compute its distance D(X) from the nearest, previously chosen centroid.
3.
Select the next centroid from the data points such that the probability of choosing a point as a centroid is directly proportional to its distance from the nearest, previously chosen centroid (i.e., the point having the maximum distance D(X) has the highest probability of being selected next as a centroid).
4. Repeat steps 2 and 3 until the required number of controller positions, that is, k centroids, are initialized.

Finding the optimal location

After the initialization of k centroids, the remaining steps are the same as in K-means.
5. Assign each data point Xi to the nearest cluster Cj: In this stage, we first use the Euclidean distance metric to determine the distance between data point X and each centroid C. After that, we select for each data point the cluster whose centroid is at the shortest distance.
6. Re-initialize centroids: Next, we re-initialize the centroids by calculating the average of all data points in each cluster.
7. Repeat steps 5 and 6: We keep repeating steps 5 and 6 until the centroids are optimal and the assignments of the data points to the right clusters are stable.

Wang et al. 79 presented a Clustering-based Network Partition Algorithm (CNPA) to address the network partition problem and eliminate the drawbacks of clustering algorithms such as K-means and K-center. The end-to-end latency and the queuing latency of controllers combine to make up the total latency. The CNPA can guarantee that each partition shortens the maximum end-to-end latency between controllers and switches. To further decrease the queuing latency of controllers, appropriate multiple controllers are then placed in the sub-networks.
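The initialization steps above can be sketched directly in Python, treating switch coordinates as data points. Following the description in the text, the sampling weight is the distance D(X) itself (note that classic K-Means++ uses D(X) squared); the function name and the toy coordinates are purely illustrative:

```python
# Sketch of K-Means++ centroid initialization for controller positions.
# Weights follow the steps above (proportional to D(X)); the standard
# algorithm weights by D(X)**2 instead.
import math
import random

def kmeanspp_init(points, k, rng):
    # Step 1: first centroid chosen uniformly at random
    centroids = [rng.choice(points)]
    while len(centroids) < k:
        # Step 2: D(X) = distance to the nearest already-chosen centroid
        dists = [min(math.dist(p, c) for c in centroids) for p in points]
        # Step 3: next centroid drawn with probability proportional to D(X);
        # already-chosen points have D(X) = 0 and cannot be picked again
        centroids.append(rng.choices(points, weights=dists, k=1)[0])
    # Step 4: loop until k centroids exist
    return centroids

rng = random.Random(7)
points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (20, 0)]
print(kmeanspp_init(points, 3, rng))
```

The k centroids returned here are only the starting positions; steps 5 to 7 (assignment, centroid recomputation, and iteration) then proceed exactly as in ordinary K-means.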
PERFORMANCE METRICS IN SDN

Numerous qualitative performance measures and techniques have been offered in the literature to address the controller placement problem in SDN. Propagation latency, reliability, load distribution, and failure resilience metrics are enhanced by effective controller placement. A few important SDN performance metrics are mentioned in Table 5. In this survey, we discuss two performance metrics: latency, the primary performance parameter used in CPP, and controller load balancing. The bulk of research to date has found that network latency between controllers and switches and from controller to controller is the most relevant issue, because it has a significant impact on SDN's overall performance. A proper placement of controllers should reduce latency, and no controller should be overloaded. With an appropriate load-balancing strategy, response time and packet loss ratio can be efficiently decreased, resource usage can be improved, and overload can be avoided. Additionally, it might increase the network's capacity for expansion, reliability, packet delivery efficiency, and durability. 82

It is essential to have a common benchmarking methodology for evaluation because there are multiple controllers available with varying architectures and properties. Zhu et al. 83 provide a thorough review of benchmarking approaches and tools for SDN controllers. The software tool used for benchmarking SDN controllers must be exceedingly efficient and precise. The most popular benchmarking tools are CBench, PktBlaster, OFNet, HCprobe, OFCBenchmark, WCBench, and OFCProbe.
Latency

The amount of time it takes for data to move from one point to another between sender and receiver, or between a specific user action and the response, is known as latency. Common latency problems can be worth looking into. A low-latency network is one that has short transmission delays, which is desirable; a high-latency network, on the other hand, has greater transmission delays and is less desirable. In the context of WANs and SDN, reducing network device communication latency is a critical challenge that requires the optimal controller location. Javadpour et al. 84 highlighted that in SDN networks, the quantity of controllers and their placement can have an impact on two metrics: reliability and latency.

Since latency has an impact on ISP/Telco network performance, it is significant and needs to be thoroughly studied. High-latency networks that experience lengthy delays impede communication. Indeed, we all desire communication with as little latency as possible. However, a network's typical latency varies slightly depending on the context, and latency problems change from network to network. Data packets are continuously processed and routed through various network channels made of wires, optical fiber cables, or wireless transmission media by network equipment including routers, modems, and switches. As a result, network operations are intricate, and a number of factors influence the rate at which data packets move. The following are common causes of network latency: the transmission medium, the distance traveled by network traffic, the network hop count, the amount of data, server performance, user issues, and physical issues or inadequate hardware.
Latency or delay is one of the most often used performance indicators. Transmission, propagation, queuing, and processing delay make up the total latency. It is possible to evaluate latency between two nodes in one of two ways: switch-to-controller latency (also known as node-to-controller latency) or controller-to-controller latency. In CPP, latency may be switch-to-switch (SS), SC, CC, or the total latency of the network.

Heller et al. 26 began the study of controller placement in SDN and suggest that propagation latency, both average and worst-case, is the primary factor to be taken into account. This issue is conceptualized as a facility location problem, and K-center is used to solve it.

Selvi et al. 52 employ several qualitative metrics, including throughput, utilization, and latency. The amount of time needed to forward a packet through a network is called latency. There are several different types of latency, including communication latency and traffic delivery latency.

Wang et al. 79 identify that a critical difficulty in SDN is selecting suitable locations for controllers to reduce the latency between controllers and switches. The CPP described a few of the performance factors that were taken into consideration, including control plane overhead, latency, load imbalance, cost, and connectivity. They use the controller-to-node latency (propagation, queuing, and processing delay) as a crucial performance parameter.

Sapkota et al. 85 propose a novel population-based meta-heuristic method, the Naked Mole-Rat (NMR) algorithm, to position controllers more effectively based on SC latency, CC latency, and load balancing among the controllers. Two widely accessible standard topologies, Ernet and Savvis, are used to demonstrate the concepts and methods. When compared to the Bat method, the NMR algorithm's controller localization approach yields somewhat superior results. Mamushiane et al.
66 extended and applied a facility-location approach known as Partitioning Around Medoids (PAM) with propagation latency to determine the optimal places to deploy SDN controllers. The study suggested using the Silhouette and Gap Statistics algorithms to decide how many controllers to deploy in a wide-area network, with the South African National Research Network (SANReN) as a case study. Rasol et al. 86 assess the Joint Latency and Reliability-aware Controller Placement (LRCP) optimization model. With the help of alternate backup channels, LRCP gives network administrators a variety of options for balancing the reliability and latency trade-offs between controllers and switches. The study proposes the Control Plane Latency (CPL) metric, the sum of the average switch-to-controller latency and the average inter-controller latency, in order to evaluate the controller placements offered by LRCP and determine how effective they would be in an actual controller deployment. Fan et al. 44 further take into account, for each link-failure state, the number of control-path reroutings and the worst-case latency between controller and switch. To solve the problem, they offer a heuristic approach based on particle swarm optimization; the numerical results demonstrate its usefulness and show that, in the majority of link-failure conditions, the suggested technique can ensure the latency and reliability of the control layer. Yuqi Fan et al. 54 present the RCP algorithm, whose objective is to minimize the average latency between all switches and their assigned controllers in the event of a single broken link. The latency of each path is the latency of the primary path plus the average over the backup paths that might be used after a single link failure. Chen et al.
29 present an approach in which the network is separated into several sub-networks, and the essential performance metric is the latency between controller and switch. The latency model in this study can be determined with Equation (1) from Liao et al. 42:

$L(s) = d_{ijk}(s, c) \quad (1)$

where $d_{ijk}(s, c)$ is the Dijkstra shortest-path distance, corresponding to the latency from switch $s$ to its associated controller $c$; $c \in \Pi(c)$ denotes that controller $c$ is placed at the position of one of the switches, and $|\Pi(c)|$ is the number of switches controlled by $c$. Equations (2) and (3) for the average and worst-case latency are given by Lu et al. 37:

$L_{avg} = \frac{1}{|S|} \sum_{i=1}^{K} \sum_{s \in S_i} d(s, c_i) \quad (2)$

$L_{wst} = \max_{1 \le i \le K} \max_{s \in S_i} d(s, c_i) \quad (3)$

where $L_{avg}$ and $L_{wst}$ are the average and worst-case latency between switches and controllers, and $d(s, c_i),\ s \in S_i$, is the distance from node $s$ to the controller $c_i$ of its subdomain. Similarly, the average controller-to-controller latency can be formulated in Equation (4) as

$L_{cc\text{-}avg} = \frac{2}{K(K-1)} \sum_{i=1}^{K-1} \sum_{j=i+1}^{K} d(c_i, c_j) \quad (4)$

where $L_{cc\text{-}avg}$ is the average latency between two controllers and $K$ is the number of controllers. Processing latency also increases significantly as a controller's load approaches or exceeds its processing capability; as a result, the load on the controllers is often balanced in order to reduce processing latency, and Equation (5) of the cited work illustrates this load term. Table 6 contains a few examples of research on different propagation- and processing-latency contexts in WAN applications. The discussion in Reference 2 largely focuses on controller placement strategies that take into account optimization goals including latency, connectivity, cost, load, energy, QoS, and control-plane overhead, or a combination of these objectives. A mathematical model for controller placement that reduces the worst-case latency in the event of controller failures was proposed by Killi and Rao.
93 Their model avoids a substantial increase in latency brought on by single connection failures; they also suggested a mathematical model to reduce the total worst-case latency and the maximum worst-case latency in the event of a single link failure. For a reliable CPP, Singh et al. 94 suggest Varna-Based Optimization (VBO) to guarantee a reduction of the overall average latency. Their results demonstrate that the proposed VBO algorithm outperforms other effective heuristic algorithms for the reliability-aware CPP (RCPP), such as PSO 75 and Teacher-Learning-Based Optimization (TLBO); experimental results on publicly accessible topologies also show that TLBO performs better than PSO. 95 The authors of Reference 96 optimized the average latency and the worst-case latency using data-plane traffic demands, concentrating on the CPP in the SDN multi-controller architecture. They identified traffic gravitation for the subdomain-division problem and created an enhanced label propagation algorithm (LPA); within each subdomain, the best controller position is determined by a heuristic based on open searching and the gravitational force of nodes toward the controller. Their experiments proved that the placement algorithm is more effective at minimizing the average and worst-case latency with fewer controllers. Dhar et al. 97 proposed a mathematical approach called the "$-method" to form clusters and put one controller in each cluster so as to reduce the worst-case SC latency. In terms of worst-case SC latency minimization with fewer controllers, the "$-method" outperforms other current techniques. By reassigning the switches of a failed controller to the controllers closest to them, they have also examined the controller-failure mode, showing that the method performs better in terms of network fault tolerance and improves network resilience.
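The clustering-style placement strategies discussed above (K-center, PAM, the "$-method") can be made concrete with a short sketch. The code below is an illustrative greedy 2-approximation for the k-center problem, not the exact algorithm of any cited work: it places K controllers at switch locations on a made-up weighted topology (edge weights standing in for link latencies) and reports the average and worst-case switch-to-controller latency of Equations (2) and (3).

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path distances from src in a weighted adjacency-dict graph."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def greedy_k_center(graph, k):
    """Greedy 2-approximation: repeatedly place the next controller at the
    switch currently farthest from all already-chosen controllers."""
    nodes = list(graph)
    dist = {u: dijkstra(graph, u) for u in nodes}   # all-pairs distances
    controllers = [nodes[0]]                        # arbitrary first pick
    while len(controllers) < k:
        farthest = max(nodes, key=lambda u: min(dist[c][u] for c in controllers))
        controllers.append(farthest)
    return controllers, dist

# Toy topology (made-up latencies in ms); every switch is a candidate site.
graph = {
    "s1": {"s2": 2, "s3": 5},
    "s2": {"s1": 2, "s3": 2, "s4": 6},
    "s3": {"s1": 5, "s2": 2, "s4": 3},
    "s4": {"s2": 6, "s3": 3, "s5": 1},
    "s5": {"s4": 1},
}
controllers, dist = greedy_k_center(graph, k=2)
sc = [min(dist[c][s] for c in controllers) for s in graph]  # per-switch SC latency
l_avg = sum(sc) / len(sc)   # Equation (2)
l_wst = max(sc)             # Equation (3)
print(controllers, l_avg, l_wst)
```

On this toy input the greedy rule picks s1 and then s5 (the switch farthest from s1), giving L_avg = 1.4 ms and L_wst = 4 ms; a worst-case-oriented (K-center) objective would judge the placement by L_wst alone.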
During the real-time migration of an existing legacy network to a Software-Defined IPv6 (SoDIP6) network, optimal-path routing and the Breadth-First router Replacement (BFR) approach are used to determine the best location for the SDN control plane. Dawadi et al. 98 analyzed the optimal latency in order to decide where to locate the controller using BFR. In Reference 79, the most important factor is the network latency between controllers and switches, because it has a significant impact on SDN's overall performance. The end-to-end latency and the controller queuing latency are two further sources of latency that the authors study and examine. A CNPA is then presented to partition the network so as to reduce end-to-end latency: each partition reduces the maximum end-to-end latency between controllers and switches, and the required number of controllers is subsequently installed in the subnetworks to further reduce the controllers' queuing delay. By deploying controllers in an SDN-enabled wide-area network, the study aims to decrease the maximum latency between controllers and switches. An exemplary problem formulation by Wang et al. 79 is described briefly. The Haversine formula in Equation (6) is used to compute the great-circle distance between a pair of switches (or any two nodes), as in Veness et al. 99 and Wang et al. 100, and the shortest-path distance is calculated using Dijkstra's algorithm, as in Skiena et al.
101

$\mathrm{Distance} = 2r \cdot \arcsin\left(\sqrt{\sin^2\left(\frac{\varphi_2 - \varphi_1}{2}\right) + \cos\varphi_1 \cos\varphi_2 \sin^2\left(\frac{\lambda_2 - \lambda_1}{2}\right)}\right) \quad (6)$

where $\varphi$ is the latitude of a node, $\lambda$ is its longitude, and $r$ is the radius of the earth. The end-to-end latency and the queuing latency are calculated by Equations (7) and (8), respectively. The end-to-end latency for a packet traveling from switch $S_m$ to controller $C_n$ over the links $i$ of its path is

$L_{e2e}(S_m, C_n) = \sum_{i \in \mathrm{path}(S_m, C_n)} \left( DT_i + DP_i + D_{sp_i} \right) \quad (7)$

Here the packet transmission latency is $DT_i = P_i / B_i$, where $P_i$ is the number of bits of a packet on link $i$ and $B_i$ is the bandwidth of the link. The propagation delay is $DP_i = d_i / S$, where $d_i$ is the distance of link $i$ and $S$ is the signal speed at which data travels through the medium. $D_{sp_i}$ denotes the switch processing latency, which is affected by the load of switch $i$. For the queuing latency of Equation (8), $m$ stands for the system's server/controller count; all controllers are assumed to have the same service rate in order to simplify the analysis, and the traffic intensity is $\rho = \lambda/(m\mu)$. The total latency of Equation (9) is the sum of the end-to-end latency and the queuing latency. The objective formulation of Equation (10) minimizes the maximum total latency such that $s_m, c_n \in SDN_i \ (\forall i \in k)$; assuming the latency requirement is denoted $T_{th}$, the maximum total latency is expected to stay below $T_{th}$. Due to differing geographic distributions, each subnetwork has a significantly varying density of switches; subnetworks with many switches can suffer substantial queuing latency, so multiple controllers are installed inside such subnetworks.

Controller load balancing
The SDN controller plays an important role in enabling load balancing in distributed systems by optimizing resource allocation, minimizing response time, and maximizing throughput. Since the SDN controller can provide an extensive overview of the available resources, using multiple load-balancing techniques in SDN networks can improve network performance. Alhilali et al.
102 classify AI-based LB strategies into four main categories: nature-inspired techniques, machine learning, mathematical models, and other LB techniques. They provide comprehensive details on the metrics applied by the various LB methods used to assess load-balancing performance in SDN, including response time, throughput, resource utilization, latency, workload degree, deployment cost, jitter, packet-loss ratio, delay, round-trip time, bandwidth-utilization ratio, migration delay, link utilization, flow-completion time, migration cost, overhead, packet-load ratio, power consumption, and cumulative distribution function. Network-partitioning-based MCPP implementation approaches over SDN with load balancing for large networks are summarized in Table 7. SDN load balancing has also brought forward the concept of multi-controller SDN. A controller experiences a higher load as the number of nodes linked to it increases, and the rise in node-to-controller requests introduces additional delays through queuing at the controller. The nodes assigned to the various controllers must therefore be balanced to keep the controller placement resilient. Additionally, suitable load balancing helps maximize scalability, minimize response time, maximize throughput, minimize resource consumption, and avoid overloading any single resource. Neghabi et al. 80 point out that although load-balancing techniques are important for SDN, there is not yet a full and comprehensive systematic investigation of this field. They present a survey of the existing mechanisms and compare their properties, describe common load-balancing techniques in SDN, and identify the types of difficulties that should be taken into consideration.
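The queuing delay at a controller mentioned above is commonly estimated with a textbook M/M/m model. The sketch below uses the standard Erlang C result with illustrative arrival and service rates; it is a generic approximation, not the exact queuing formulation of any work cited here.

```python
from math import factorial

def mm_m_wait(lam, mu, m):
    """Mean queuing delay (seconds) in an M/M/m queue via the Erlang C formula.

    lam: aggregate request arrival rate, mu: per-controller service rate,
    m: number of controllers; requires lam < m * mu for stability.
    """
    a = lam / mu                      # offered load
    rho = a / m                       # traffic intensity, must be < 1
    assert rho < 1, "unstable system"
    p_wait = (a**m / factorial(m)) / (
        (1 - rho) * sum(a**k / factorial(k) for k in range(m))
        + a**m / factorial(m)
    )                                 # Erlang C: probability a request queues
    return p_wait / (m * mu - lam)

# One controller handling 500 req/s with a 1000 req/s service rate:
print(mm_m_wait(500, 1000, 1))   # 0.001 s, i.e. 1 ms mean queuing delay
# Splitting the same load over two controllers reduces the queuing delay:
print(mm_m_wait(500, 1000, 2))
```

The second call illustrates why subnetworks with many switches (hence high arrival rates) are given multiple controllers: at fixed total load, adding servers sharply reduces the probability that a request waits at all.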
An SDN-based load balancer physically separates the network control plane from the forwarding plane, and load balancing with SDN allows more than one device to be controlled at the same time. If nodes are simply assigned to the nearest controller, using latency or the shortest-path distance between node and controller as the metric, some controllers may become overloaded by excessive traffic flows, and the number of nodes per controller in the network may be unbalanced. 52 Load balancing is a strategy that distributes the workload across multiple resources in order to prevent overload on any one of them. 108 Its objectives include increasing throughput, reducing response time, and improving traffic distribution. 109 SDN load-balancing techniques are comparatively precise and perform well, and owing to commercial concerns, load balancing is one of the most crucial topics in SDN-related research. 110 Dynamic load adjustment is a technique for distributing the workload among the current hosts by executing virtual-machine migration and analyzing workload performance at various time intervals. 111 Using two strategies, controller clustering, which targets the dynamic controller, and switch migration, which manages the load distribution between controllers, Hu et al. 49 present a complete review that explains the multi-controller concept and highlights the role of load balancing. Using a new controller-state synchronization approach, load-variance-based synchronization (LVS), Zehua et al. 112 addressed the looping and controller-synchronization issues to improve load-balancing performance in multi-controller architectures. Using a poly-stable matching algorithm, Killi and Rao 113 aimed to decrease the maximum load imbalance among controllers.
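The switch-migration idea described above (moving load away from an overloaded controller) can be sketched as a simple threshold rule. The controller assignments, load values, and threshold below are made-up numbers, not taken from any cited scheme:

```python
def migrate_once(assignment, loads, threshold):
    """If any controller's total load exceeds the threshold, move that
    controller's heaviest switch to the least-loaded controller.

    assignment: {switch: controller}, loads: {switch: load units}.
    Returns True if a migration was performed, False otherwise.
    """
    totals = {}
    for sw, ctrl in assignment.items():
        totals[ctrl] = totals.get(ctrl, 0) + loads[sw]
    overloaded = max(totals, key=totals.get)
    if totals[overloaded] <= threshold:
        return False                     # every controller is within budget
    lightest = min(totals, key=totals.get)
    heaviest_sw = max(
        (sw for sw, c in assignment.items() if c == overloaded),
        key=lambda sw: loads[sw],
    )
    assignment[heaviest_sw] = lightest   # migrate one switch
    return True

assignment = {"s1": "c1", "s2": "c1", "s3": "c1", "s4": "c2"}
loads = {"s1": 40, "s2": 30, "s3": 20, "s4": 10}
while migrate_once(assignment, loads, threshold=60):
    pass
print(assignment)   # s1 migrates: c1 keeps s2 and s3, c2 gains s1
```

Real schemes differ in when loads are sampled, how the threshold adapts per controller, and in guarding against the migration itself overloading the target controller, but the core loop is this measure-compare-migrate cycle.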
The controller capacity is impacted by the increasing number of SDN switches during migration; a suitable strategy is required to locate and install additional controllers in the network for load balancing, in order to handle the growth of control traffic as the number of SDN switches rises. 98 An effective load-balancing technique for numerous controllers in SDN is developed by Zhong et al., 114 who present SCPLBS for SDN distributed controllers. The interval at which controllers collect and publish load information can be changed adaptively; if the load on a controller exceeds the threshold, the switch with the highest load is transferred to the controller with the lowest load. The study by Zhou et al. 115 suggests DALB, an algorithm that aims to balance the load on each controller: the switch with the highest load is relocated whenever a controller's load exceeds the threshold. DALB reduces controller overhead by using a threshold for load collection, and the threshold can be adjusted adaptively for each controller based on the load it handles. According to Konglar et al., 103 a distributed controller architecture does not by itself ensure that the overload issue can be fully resolved. In order to reduce the load on overloaded controllers, their work suggests LDOP, with a controller manager that is solely responsible for transferring switches from an overloaded controller to other controllers. The LDOP method can also lower the total load from all controllers exceeding the threshold; switches are moved so that the migration does not turn the chosen target controller into an overloaded one, and the algorithm can operate even when all controllers are overloaded. In their research, Babbar et al.
105 have presented a latency-based load-balancing solution for multiple SDN controllers. Their technique resolves the load-balancing issues with multiple overloaded controllers in the SDN control plane by determining the necessary latency and addressing multiple overloads concurrently; alongside the migration, their algorithm's latency decreased by 25% compared with other algorithms. In Reference 68, communication with the switches is handled by multiple controllers in a distributed architecture; distributing the control traffic between switches and controllers reduces the processing load on each controller, which has a positive load-balancing effect. The load-balancing approach for distributed SDN controllers presented in Reference 104 is dynamic, adaptable, and based on a hierarchical control plane. The authors put forward a load-balancing mechanism for distributed controllers that can dynamically migrate load, via switch migration, from a heavily loaded controller to a lightly loaded one in accordance with the load distribution of the control plane. According to the simulation results, the suggested technique can dynamically balance the load on the control plane and boost the throughput of distributed controllers. Wang et al. 53 presented a load-adjustment technique applied at each controller in order to achieve load balance among multiple controllers. Three logical parts make up the suggested mechanism: a load collector, a load balancer, and a switch migrator. The experimental results demonstrated the effectiveness of the data-gathering process and of the global and local load adjustment; Cbench was used to generate traffic simulating different loads for each switch. Based on a traffic pattern that divides the traffic into TCP and UDP, Gasmelseed et al.
107 suggest a new load-balancing method for SDN. In order to overcome the centralized controller's management, scalability, and availability constraints, the study uses a distributed controller design. According to this study, the suggested algorithm outperforms random, round-robin, and weighted round-robin schemes in terms of availability, response time, transaction rate, throughput, concurrency, and packet loss. To prevent the single-point-of-failure problem and to achieve fault tolerance, the proposed algorithm employs a failover mechanism; it also proves effective in a dynamic network with heterogeneous traffic that demands high availability, low response times, high transaction rates, high throughput, minimal concurrency, and minimal packet loss. From their simulation results, Lin et al. 116 concluded that the proposed robust controller placement heuristic with integer linear programming (RCP-ILP) enhances the robustness, efficiency, and load balancing of SDNs against link failures with the fewest controller placements. Yang et al. 117 presented Simulated Annealing Partition-based K-Means (SAPKM), a low-complexity controller-placement algorithm for SDWAN, to provide load balancing among distributed controllers. Through the simultaneous improvement of propagation delay and network-reliability performance, experimental findings showed the efficacy of SAPKM in lowering the average-load and load-balancing indices. For a typical ISP, Al et al. 14 describe an SDN load-distribution method, built as an optimization technique using linear programming. A multi-objective model has been created whose numerous objectives include increasing link availability, maximizing link utilization, and minimizing the typical round-trip time to the most popular websites.

SDN IN ISP/TELCO NETWORKS
Dawadi et al.
13 compared the network performance of a traditional network with that of an SDN network for Internet Protocol (IP) routing in order to assess the viability of SDN deployment in ISP/Telco networks. They found that SDN-IP performs better in terms of bandwidth and latency. Their experimental investigation of interoperability between SDN and traditional networks demonstrates that SDN implementation in a production-level, carrier-grade ISP network is practical and forward-looking, and they argue that this experimental work should motivate service providers to convert their traditional networks to SDN effectively. Zhang et al. 68 evaluated the delay trade-off of the controller placement problem for several real ISP network topologies and proposed a new evolutionary algorithm to find the corresponding Pareto frontier. They also developed a simple model to estimate the response time perceived by a switch, accurately validated against a working software-defined WAN (SDWAN), as well as new approximation algorithms that formulate optimization problems to minimize response time, evaluating their performance against optimal solvers on real ISP topologies. In order to convert traditional IPv4 networks to multi-domain SoDIP6 networks and investigate the viability of joint network migration in ISP networks, the study in Reference 98 implements SDN-IP and the ONOS SDN controller. The authors present findings from thorough simulations for the best location of the master ONOS controller during network migration, minimizing control-path latency by means of optimal-path routing and the BFR technique. Utilizing the ONOS/SDN-IP platform for network migration, the same work 98 tracks the migration of shortest-path routers and finds the optimal location for controller installation, with instance generation for control-traffic load balancing through the selection of the median-point router.
Dawadi et al. 98 observed that, owing to the lack of mature SDN-based standards and the many other considerations involved in migrating existing legacy IPv4 networks in real time, SDN deployment across ISP/Telco networks remains a difficult problem: numerous migration strategies have been researched, but none of them appears close to practical deployment. Due to the increase in traffic demand, ISPs must strengthen their current network infrastructure in order to handle peak traffic loads and respond to situations that impair network performance, such as link failures or router problems. 118 During incremental SDN deployments the controllers handle only the SDN-capable nodes, so each ISP must be extremely selective about which nodes to upgrade and when to do so. Poularakis et al. 119 studied the issue of when, and at which nodes, an ISP network should undergo SDN upgrades. They concentrated on two common objectives for ISPs: (i) increasing the amount of programmable traffic that travels through at least one SDN-enabled node; and (ii) increasing the flexibility of traffic engineering, that is, the number of additional paths made available to flows by SDN upgrades. As part of their research, they also examine the dual upgrading problem, which involves guaranteeing certain performance goals while minimizing the ISP's upgrade cost. According to Vissicchio et al. in Reference 10, it is nearly impossible to upgrade a full ISP network to SDN in one step, because doing so would impose an extreme operational burden and increase performance and security risks. ISPs are therefore expected to migrate to SDN incrementally, that is, by progressively upgrading their network nodes over a number of years. In these incremental deployments the controllers handle only the SDN-enabled nodes, while the rest of the traditional network continues to use OSPF-like routing protocols. Kong et al.
120 evaluate real-world ISP traffic using OFSim, and their results indicate that (i) current OpenFlow switch implementations cannot handle real-world ISP traffic, and (ii) although there is a controller-scalability issue, the performance bottleneck may actually lie in the current OpenFlow switches, where the flow-table entry installation delay is the more urgent problem. Moradi et al. 121 introduce Dragon, a traffic-engineering (TE) application framework for the SDN control plane in large ISP networks, to address the scalability issue. Through rigorous evaluations on real topologies and prototyping with SDN controllers and switches, they demonstrate that Dragon outperforms existing TE approaches in terms of speed and optimality. SDN will also play a significant role in the future telecommunications landscape. Its most significant application scenarios are (1) virtualization of mobile core networks, (2) virtualization of content-delivery networks, and (3) virtual network platform as a service (VNPaaS). Telcos around the world are the main players in SDN business and services, because the telecom sector recognizes the significance of the efficient networks that SDN provides. By enhancing the quality of service at network nodes close to the Internet of Vehicles (IoV), vehicular ad-hoc networks (VANETs) can be made more effective. 122 Research trends in the joint paradigm of 5G and SDN include adaptive clustering in SDN-enabled 5G, SDN-based advance channel allocation in 5G networks, joint optimization for reduced power consumption, and SDN-based 5G IoV architectures. Transmission latency, energy efficiency, scalability, mobility and routing, interoperability, and security make up the majority of the research topics.
123 The next-generation advanced network must be elastic, flexible, and dynamically adaptable to user needs. Mobile edge computing for SDN-based wireless networks, 124 mobile edge computing using deep reinforcement learning, 125 5G-slicing-enabled scalable SDN core networks, 126 adaptive clustering in SDN-enabled 5G VANETs, 127 hybrid SDN-based distributed cloud architectures with the control plane distributed over different controller types (i.e., fog, edge, SDN), 128 deploying SDN OpenFlow with the 5G mobile network, 129 security improvement in SDN-based 5G networks, 130 and 5G security and emerging concepts are a few current research issues in joint SDN and 5G environments.

SDN SECURITY CONSIDERATIONS IN ISP/TELCO NETWORKS
There are several difficulties with SDN implementation for ISP/Telco networks that must be taken into account and resolved. Despite all its features and functions, the security of SDN is still considered a major concern: the SDN infrastructure is vulnerable to various security threats, with application-layer, control-layer, and infrastructure-layer attacks among the more frequent. Every SDN component must be secured in order to maintain a secure SDN environment. 131 There are various SDN security attacks: hardware Trojan attacks, replication attacks, malicious-code attacks, eavesdropping attacks, spoofing attacks, Sybil attacks, distributed denial-of-service (DDoS) attacks, and man-in-the-middle attacks. 132 One of the most common of these is the DDoS attack. Such attacks are more likely to succeed in an SDN context because of the centralized controller and the ease with which flooding can cause disruption. The DDoS attack is a common security intrusion tactic used by attackers to prevent authorized users from accessing a targeted host or other network resources. DDoS attacks are divided into three categories: application-based, protocol-based, and volume-based attacks.
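One lightweight detection idea that recurs in the SDN security literature for the volume-based category, offered here as a generic illustration rather than a technique from any work cited above, is to monitor the entropy of destination addresses in a traffic window: a volumetric flood concentrates traffic on few targets, so the entropy drops below its normal baseline.

```python
from collections import Counter
from math import log2

def dst_entropy(window):
    """Shannon entropy (bits) of the destination-address distribution."""
    counts = Counter(window)
    total = len(window)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Balanced traffic across four hosts vs. a flood aimed at one victim.
normal = ["10.0.0.%d" % (i % 4 + 1) for i in range(100)]
attack = ["10.0.0.9"] * 95 + normal[:5]

THRESHOLD = 1.0   # made-up baseline; tuned per network in practice
for label, window in [("normal", normal), ("attack", attack)]:
    h = dst_entropy(window)
    print(label, round(h, 2), "ALERT" if h < THRESHOLD else "ok")
```

A controller can evaluate such a statistic cheaply from flow-table counters, which is one reason entropy-style detectors are popular in SDN settings; the threshold itself must be calibrated against each network's normal traffic mix.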
133 To keep up with the constant influx of new threats without compromising network performance or harming the customer experience, service providers require a cutting-edge strategy and toolkit. The SDN controller must provide techniques for fault tolerance, recovery, and backup, and must protect the network from malicious attacks, errors, and failures; its centralized and programmable nature exposes security to new dangers and weaknesses. ISP/Telco operators must implement and integrate a variety of reliable and secure techniques and tools, including encryption, authentication, authorization, auditing, logging, monitoring, testing, verification, and debugging, in order to address some of the issues raised by SDN. These can help protect the integrity, availability, and resilience of the SDN network by preventing, detecting, and mitigating risks and damage (https://www.linkedin.com/advice/3/whatbenefits-challenges-implementing-sdn-traffic). A secure SDN architecture supports application security, API security, controller security, DDoS protection, infrastructure security, and so forth. 134 Dynamic variations in application demand must also be taken into account. 135 Emerging technologies such as the IoT, autonomous vehicles, telemedicine, and smart cities have always attracted attackers; the security threats against the supported services must therefore be addressed by 5G networks. 136

DISCUSSIONS AND FUTURE RESEARCH DIRECTIONS
In this survey, we first discussed the CPP, the MCPP, and performance metrics in SDN, then emphasized the significance of load balancing when choosing multi-controllers. A number of challenges remain to be researched, 38,51 but none of the surveyed works has particularly addressed ISP/Telco networks. Research must therefore focus on CPP issues in large-scale ISP/Telco networks.
The majority of the research approaches do not take into account dynamic load balancing and prospective ISP/Telco failure scenarios, so a load-balancing method tailored to ISP/Telco networks is quite promising. Additionally, most of the reviewed methods omit a load detection/prediction step; a new technique that performs load prediction using ML and AI is thus another area for future development. Beyond the technical difficulties associated with the performance and scalability of SDN solutions in WANs, ISPs must also make a significant investment to switch from their present legacy hardware deployments to SDN solutions. Furthermore, controller placement based on long-term and short-term planning and on traffic prediction remains an unresolved research question in SDN. The CPP and MCPP with network-partition-based, load-balanced placement can also be evaluated in a variety of scenarios in 5G and beyond networks; further research should be done to find solutions to these issues accordingly. More research is needed on effective traffic-balancing solutions, and we encourage future researchers to pursue these topics in order to create novel methods for MCPP with load balancing for SDN in WANs. Regarding the controller placement (CP) issues in the context of next-generation IP and advanced networking technologies with SDN support, we identify the following potential research topics: 1.
Controller security issue in the joint paradigm of SDN and 5G: The controller's security is one of the most crucial issues. In the near future, 5G and SDN will promote mobile communication through the creation of cutting-edge applications such as smart cities, advanced military security, and intelligent traffic, so the security concerns that arise when integrating SDN with 5G must be solved. For instance, the centralized controller in SDN may introduce new threats to network flows, requiring the implementation of a strong security framework; researchers should therefore think carefully about high-level security for controller-placement techniques. 2. CP issue in SD-IoT networks: A massive, intelligent, efficient, secure, affordable, and scalable IoT must be developed to manage the rapidly increasing number of devices. SDN has become an essential platform for implementing Internet of Things (IoT) services due to the high benefits of separating the network control plane from the data plane. In contrast, distributed and dynamic IoT networks cannot be served effectively by a statically placed SDN controller; intelligent approaches to controller placement and load balancing in SDN-IoT networks must therefore be researched. 3. CP issue in SDN-enabled 5G satellite networks: SDN controllers should be located so as to minimize the effects of any failure or inefficiency in the nodes or links while continuing to improve average control-path reliability and lowering controller-to-gateway latency. Because of the changing network topology in 5G satellite networks, the controller-location issue is definitely significant and needs to be investigated. 4.
CP issue in the SDN-based distributed cloud, fog, and edge (5G-enabled and beyond): To meet different performance needs, SDN controllers can be deployed centrally or in a distributed fashion. When the controllers support both IP and SDN switches, the design becomes more complicated, and scalability, consistency, reliability, and security levels vary across these designs. For the control plane to be correctly extended over all the different controller types (such as fog, edge, and cloud), a special kind of controller design is needed; in order to use the scarce fog resources efficiently, the controller needs to be knowledgeable and intelligent. The efficient study of controller connectivity and collaboration approaches among all controllers must be addressed. 5. CP issue in the SDN-based IoV: Controller placement is a key challenge for achieving SDN's stability and flexibility under changes in network state. Owing to flow fluctuations in the highly dynamic IoV, it is challenging to offer reliable and scalable wireless network services for emerging applications in the 5G-and-beyond future. Intelligent and efficient ML-based techniques must be studied for the optimal positioning of SDN controllers at the network edge for IoV, considering different optimization problems such as latency and load balancing. Further, placement driven by machine-learning-based prediction of road traffic, taking more placement metrics into account in sequence, and decreasing inter-controller synchronization overhead could be research directions for the CP issues in a Software-Defined Vehicular Network (SDVN) context. 6.
CP issue in Software Defined Wireless Sensor Networking (SDWSN): Multiple SDN controller nodes will be needed to manage the configuration of large-scale WSNs. The use of multiple SDN controllers to create a physically distributed SDN is a common way to enhance performance, increase scalability, and improve fault tolerance. Deploying a controller is still a difficult problem in SDWSN because it depends on the network's size and requirements. Thus, identifying the optimal location of SDN controllers and handling multiple-controller deployments are open research issues for SDN-enabled WSNs.
7. CP issue in SDN-Optical Networks (SDON): With their increased bandwidth capacity, optical networks assist the telecommunications industry globally in a number of ways, including faster data rates, greater transmission range, and reduced latency. To achieve balance and the best SDON performance, the proper placement of SDON controller components should be addressed in future work.

CONCLUSION
As far as we are aware, most state-of-the-art reviews on MCPP in SDN address data centres, and few cover ISP/Telco networks. With the intention of establishing the context for SDN, we first provided a quick overview of SDN and its architecture in this paper. We discussed load balancing techniques and research approaches related to CPP and MCPP. To increase the effectiveness of SDN, controller placement must be done optimally. Multiple controllers are necessary for large-scale SDN in order to ensure scalability and stability. At present, the majority of researchers are working on the controller placement problem for data centers. However, they do not offer a particularly efficient approach for large-sized ISP/Telco networks with multiple controllers under dynamic traffic load and faults. The efficient and effective choice of the number and placement of SDN controllers is a challenge that might be solved through further research in this field. Thus, future research is required to increase the availability and performance of the network and to make the network more reliable.

After the completion of this survey, the following points are concluded and highlighted:
1. There are challenges, issues, and opportunities for controller placement in SDN networks over large-scale ISP/Telco deployments. These need to be identified and addressed.
2. The best strategy for placing multiple (newly added) SDN controllers in large-scale ISP/Telco networks must be determined.
3. Load statistics for multiple controllers are to be measured, and an intelligent load distribution strategy must still be developed.
4. Researchers need to focus on optimizing the number and locations of controllers in order to create balanced controller load distributions in the network.
5. Modeling SDN security and researching the effects of CPP optimization are both necessary for future work.

[Figure: Generic layered view of SDN.]
[Table 2: Controllers support a number of Northbound Interfaces (NBIs), but most of them are based on REST APIs. To improve controller interoperability, standard east-west interfaces like ALTO, Hyperflow, etc. define several network controller communication protocols.]
[Table 3: Factors of CPP.]
[Table 4: SDN controller placement strategies.]
[SDN performance metrics: degree of load balancing, throughput, utilization, execution time, peak load ratio, response time, overhead, root mean square error, packet loss rate, percentage of matched deadline flows, energy consumption, migration cost, forwarding centric, guaranteed bit rate, overload ratio, average number of synchronizations per minute, and workload.]
[Table 5: Different latency approaches.]
[Table 6: Network partitioning-based MCPP implementation approaches in different state-of-the-art.]
[Table 7]
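As an aside on the placement formulations surveyed above, the latency-oriented variant of the controller placement problem is commonly cast as a k-center problem: choose k controller sites that minimize the worst-case switch-to-controller latency over shortest paths. The sketch below is a minimal brute-force illustration in Python; the 6-node topology and per-link latencies are invented for the example, and exhaustive search only scales to small graphs.

```python
from itertools import combinations

def shortest_paths(n, edges):
    # All-pairs shortest-path latencies via Floyd-Warshall
    INF = float("inf")
    d = [[INF] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0
    for u, v, w in edges:  # undirected links
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def place_controllers(n, edges, k):
    # Brute-force k-center: minimize the worst-case latency from any
    # switch to its nearest controller
    d = shortest_paths(n, edges)
    best, best_cost = None, float("inf")
    for sites in combinations(range(n), k):
        cost = max(min(d[s][c] for c in sites) for s in range(n))
        if cost < best_cost:
            best, best_cost = sites, cost
    return best, best_cost

# Toy 6-node topology with per-link latencies (ms) -- invented for illustration
edges = [(0, 1, 2), (1, 2, 3), (2, 3, 2), (3, 4, 4),
         (4, 5, 2), (0, 5, 6), (1, 4, 5)]
sites, worst = place_controllers(6, edges, k=2)
```

For realistic ISP/Telco topologies, heuristics or the network-partitioning approaches discussed in the survey replace this exhaustive search, and load-balancing or reliability terms are added to the objective.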
Teachers' Professional Ethics and Classroom Management as a Correlate of Students' Academic Performance in Public Secondary Schools in Abia State, Nigeria

The study examined teachers' professional ethics and classroom management as a correlate of students' academic performance in public secondary schools in Abia State, Nigeria. Two research questions and two null hypotheses guided the study. The study adopted a correlational research design. The population of the study consisted of 9,200 secondary school students in public secondary schools. The study sampled 920 students, representing 10% of the population, using a stratified random sampling technique. The instrument for data collection was a structured questionnaire titled "Teachers' Professional Ethics and Classroom Management of Students' Academic Performance (TPECMSAP)". The instrument was validated by three experts. The instrument was trial-tested and its reliability calculated with Pearson's Product Moment Correlation, which yielded an index of 0.71 for teachers' professional ethics and 0.89 for classroom management. Data collected were analyzed using mean and standard deviation to answer the research questions. Pearson's r, R² (coefficient of determination) and multiple regression analysis were used to test the null hypotheses at the 0.05 level of significance. The findings of the study revealed that there is a significant relationship between teachers' professional ethics, classroom management and students' academic performance. The findings also revealed that the employment of qualified teachers and other professionals who meet the demands of their practice ensures high academic performance. Based on the findings, it was recommended, among others, that government and school administrators should organize seminars, workshops and conferences to create more awareness of the influence of teachers' ethics and classroom management on the academic performance of students in Nigeria.
Introduction
Academic performance of students in any country often depends on the quality of education being given to the child and how well the child has achieved his or her educational goals. Therefore, when the academic performance of students is low, the educational system is said to be low as well (Moswela and Gobagoba, 2014). Academic performance can thus be defined as the end result of teaching and learning, the level at which the learner or the instructor has accomplished their learning aims and objectives. Dimpka (2015) perceived academic performance as the outcome of education, the extent to which a student, the teacher or the institution has achieved their educational set goals. From the above definitions, the researchers opined that in order to improve the quality of education, quantitative factors such as time management, school location, a conducive environment, qualified teachers, instructional materials, and infrastructural facilities are expected to be available and accessible in the learning environment. Nonetheless, issues like teachers' professional disposition, conduct and the ethics of the teaching profession are often neglected. It is therefore suspected that teachers' conduct and ethical inclination have a direct link with students' academic performance in school, and that lack of adherence to professional ethics by teachers may impact negatively on students' learning behaviour. Hence, Jaques (2003) refers to lack of adherence to teaching ethics as a violation of the ethical requirements of the teaching profession. Ethics are standards that make an action right or wrong. They are the standards that influence behaviour and allow individuals to make choices. On the other hand, teaching ethics are those standards that help to categorize different values, such as integrity, discipline and honesty among others, that apply in the teaching and learning process.
To Dienye (2012), teaching ethics are those rules and regulations which guide the conduct of teachers with morality. Dimpka (2015) opined that teaching ethics are an organized system of standards for the behaviour and practice of members of the teaching profession. Operationally, teaching ethics can affect the image of the individual teacher, the image of the school, the image of the teaching profession and the image of the country at large. Teachers' professional ethics is concerned with the norms, values and principles that should govern teachers' professional conduct. To Anangisye (2011), teachers' professional ethics is a form of service which requires the teacher's expert knowledge and specialized skills, acquired and maintained through rigorous and continuing study. Teachers' professional ethics plays a crucial role in educational achievement because teachers are ultimately responsible for translating educational policies and principles into actions based on practice during interaction with students (Afe, 2001). Similarly, teachers' professional ethics requires basic management skills and the ability to understand the nature of the profession. According to Carrie and Ellen (2003), teachers' professional ethics includes the teacher's knowledge of the subject matter, ability to communicate, emotional stability, good human relationships and interest in the job. Esmaeili et al. (2016) posited that a meaningful correlation exists between the professional ethics of university professors and the academic progress of students in five subcategories, namely teaching, research, manners, humane relations and organization. Similarly, Fehintola (2014) asserted that the employment of teachers who are qualified and abreast of their professional demands and practices ensures high academic performance of students and goes a long way to reduce examination malpractice during classroom examinations.
Classroom management in this study involves curtailing learners' disruptive behaviour such as fighting and noise making, close observation, arrangement of classroom learning materials and response to students who suffer from poor sight, poor reading, poor writing and poor spelling habits (Morse, 2012). Similarly, Nicholas (2007) described classroom management as the process of incorporating every element of the classroom, from lesson delivery to the classroom environment. Umoren (2010) posited that the concept of classroom management is broader than the notion of student control and discipline; it includes creating an organized classroom, establishing expectations, inducing students' cooperation in learning tasks and dealing with the procedural demands of the classroom. In essence, Bassey (2012) opined that classroom management brings increased order, reduction of inappropriate and disruptive behaviour, promotion of students' responsibility for academic work and improved academic performance of students. Ndiyo (2011) posited that the teacher's efficiency in classroom management stands among the most important factors that influence students' academic performance. Furthermore, Baker (2000) opined that effective management techniques support and facilitate teaching and learning and in so doing enhance students' academic performance. Operationally, classroom management is the process that includes all teacher activities expected in the classroom to foster students' academic involvement and cooperation. Nonetheless, despite all studies of and remedies for the academic performance of secondary school students in public secondary schools, their performance has continued to decline.
Therefore, this study sought to find out the correlation between teachers' professional ethics and classroom management and the academic performance of students in public secondary schools in Abia State, Nigeria.

Statement of the Problem
The quality of education, both in teaching and learning, depends on the teachers. For the past years, students' academic performance in both internal and external examinations has been used to evaluate teachers' methods of teaching. Thus, an effective teacher has been conceptualized as one who produces desirable results in the course of his duty as a teacher. The issue of poor academic performance of students in Nigeria has been of much concern to all and sundry. The problem has led to a widely acclaimed fall in the standard of education in Abia State in particular and Nigeria at large. Despite the government's huge investment in public secondary schools, the output is still rated poor when compared with students' academic performance in both internal and external examinations. Furthermore, the increasing rate of poor academic performance of students in public secondary schools in external examinations like those of the West African Examinations Council (WAEC), Joint Admissions and Matriculation Board (JAMB), and National Examinations Council (NECO) tends to shift the blame to teachers as not being qualified, or to poor examination conduct before or during classroom management. Based on the above challenges, such as irregularity of teachers in the school, poor attitude towards students, unqualified teachers and teacher absenteeism, which have contributed to the fast decline in students' academic performance, the problem of this study, put in question form, is: What is the correlation between teachers' professional ethics and classroom management and the academic performance of students in public secondary schools in Abia State?
Purpose of the Study
The general purpose of this study is to determine teachers' professional ethics and classroom management as a correlate of students' academic performance in public secondary schools in Abia State, Nigeria. Specifically, the study sought to:
1. determine the correlation of teachers' professional ethics with students' academic performance in public secondary schools;
2. determine the correlation of classroom management with students' academic performance in public secondary schools.

Research Questions
The following research questions guided the study:
1. What is the correlation of teachers' professional ethics with students' academic performance in public secondary schools?
2. What is the correlation of classroom management with students' academic performance in public secondary schools?

Hypotheses
The following null hypotheses, tested at the 0.05 level of significance, guided the study:
1. There is no significant relationship between teachers' professional ethics and students' academic performance in public secondary schools.
2. There is no significant relationship between classroom management and students' academic performance in public secondary schools.

Methods
The study adopted a correlational research design. The study was carried out in public secondary schools in Abia State, Nigeria. The population of the study was 9,200 SS2 students in the three education zones of public secondary schools in Abia State. The study sampled 920 senior students, representing 10% of the population, using a stratified random sampling technique. The instrument for data collection was a structured questionnaire titled "Teachers' Professional Ethics and Classroom Management of Students' Academic Performance (TPECMSAP)".
The instrument was validated by three experts: two from the Department of Educational Foundations and one from the Department of Science Education (Measurement and Evaluation), all from the Faculty of Education, University of Nigeria, Nsukka. The instrument was grouped into two clusters of twelve (12) items. Cluster A was on Teachers' Professional Ethics and Students' Academic Performance; Cluster B was on Classroom Management and Students' Academic Performance. The items were structured on a four-point rating scale: Strongly Agree (4 points), Agree (3 points), Disagree (2 points) and Strongly Disagree (1 point). The instrument was trial-tested and its reliability calculated with Pearson's Product Moment Correlation, which yielded an index of 0.71 for teachers' professional ethics and 0.89 for classroom management. Data collected were analyzed using mean and standard deviation. Pearson's r, R² (coefficient of determination) and multiple regression analysis were used to test the null hypotheses at the 0.05 level of significance. A hypothesis of no significant relationship is not rejected if the F- or t-calculated value is less than the table value at the 0.05 level of significance and the appropriate degree of freedom, and rejected otherwise.

Results
Research Question One: What is the correlation of teachers' professional ethics with students' academic performance in public secondary schools?
Data in Table 1 (R² = .800, adjusted R² = .799) indicate a positive relationship between teachers' professional ethics and students' academic performance in public secondary schools. The calculated r of .98 and the calculated R² of .80 indicate that teachers' professional ethics to a large extent correlates with students' academic performance in public secondary schools. Data in Table 2 show that the predictive index of teachers' professional ethics is .89, suggesting that teachers' professional ethics had an 89% contribution in predicting students' academic performance in public secondary schools.
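The statistics used here follow directly from their definitions: Pearson's r is the covariance of the paired scores divided by the product of their standard deviations, and R², the coefficient of determination, is the share of variance in one variable explained by a linear fit on the other. A small sketch (the six paired scores are invented for illustration, not data from this study):

```python
import math

def pearson_r(xs, ys):
    # Pearson product-moment correlation between paired samples
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented paired scores: mean ethics rating per school vs. mean exam score
ethics = [3.1, 3.5, 2.8, 3.9, 3.2, 2.5]
performance = [62, 70, 55, 78, 66, 50]

r = pearson_r(ethics, performance)   # close to 1 for these related toy scores
r2 = r ** 2  # coefficient of determination: share of variance explained
```

Read this way, a predictive index near .89 means that roughly 89% of the variation in the outcome is accounted for by the predictor under a simple linear model.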
Research Question Two: What is the correlation of classroom management with students' academic performance in public secondary schools?
Data in Table 3 indicate a positive relationship between classroom management and students' academic performance in public secondary schools. The calculated r of .87 and the calculated R² of .76 indicate that classroom management to a large extent correlates with students' academic performance in public secondary schools. Data in Table 4 show that the predictive index of classroom management is .87, suggesting that classroom management had an 87% contribution in predicting students' academic performance in public secondary schools.

Hypotheses
Hypothesis One: There is no significant relationship between teachers' professional ethics and students' academic performance in public secondary schools.
Table 5 shows that teachers' professional ethics is a significant correlate of academic performance in public secondary schools: the F-value of 3660.916 has a probability value of .000 and is significant at the 0.05 level. Therefore, the null hypothesis, which states that there is no significant relationship between teachers' professional ethics and students' academic performance in public secondary schools, is rejected. Consequently, there is a significant correlation between teachers' professional ethics and students' academic performance in public secondary schools.
Hypothesis Two: There is no significant relationship between classroom management and students' academic performance in public secondary schools.
Table 6 shows that classroom management is a significant correlate of academic performance in public secondary schools: the F-value of 3012.886 has a probability value of .000 and is significant at the 0.05 level.
Therefore, the null hypothesis, which states that there is no significant relationship between classroom management and students' academic performance in public secondary schools, is rejected. Consequently, there is a significant correlation between classroom management and students' academic performance in public secondary schools.

Discussion
The findings of the study revealed that teachers' professional ethics has a relationship with the academic performance of secondary school students, as teachers' professional ethics had an 89 percent contribution in predicting students' academic performance in public secondary schools. There was therefore a significant correlation between teachers' professional ethics and students' academic performance in public secondary schools. This finding is in line with Esmaeili et al. (2016), who evaluated the relationship between the professional ethics of university professors and the academic progress of students and found that a meaningful correlation exists between professional ethics in five subcategories (teaching, research, manners, humane relations and organization) and the academic progress of students. Furthermore, the finding is in line with the study of Fehintola (2014), who posited that the employment of teachers who are well qualified and abreast of their professional demands and practices ensures high academic performance of students and goes a long way to reduce examination malpractice in the country. The findings of the study also revealed that classroom management has a relationship with the academic performance of secondary school students, as classroom management had an 87% contribution in predicting students' academic performance in public secondary schools. Therefore, there was a significant correlation between classroom management and students' academic performance in public secondary schools.
The findings of the study are in consonance with the study of Ndiyo (2011), who posited that teachers' efficiency in classroom management stands as one of the most important factors that can influence students' performance in both internal and external examinations. The findings are also in agreement with the study of Baker (2000), who opined that effective management techniques, support, teaching facilities and a conducive learning environment enhance students' academic performance.

Conclusion/Recommendations
This study examined teachers' professional ethics and classroom management as a correlate of students' academic performance in public secondary schools. It started with an introduction to the major variables; the concepts of academic performance, ethics, teachers' professional ethics and classroom management were discussed. Hence, the employment of qualified teachers who embrace their teaching profession in practice goes a long way to motivate and enhance high academic performance of public secondary school students and the society at large. Based on the findings of the study, the following recommendations were made:
1. Government, school administrators and religious groups should organize seminars, workshops and conferences to create more awareness of the relationship between teachers' ethics, classroom management and the academic performance of students.
2. Teachers should be discouraged from engaging in negative practices such as sexual relationships with their students, unfair treatment, lack of positive attention to students and other activities that will hinder students' academic performance.
3. There is need for regular professional training on relevant knowledge expected to improve teachers' teaching behaviour toward classroom management.
4. Teachers should establish rules and regulations in the classroom against disruptive behaviour and also ensure they are disciplined so as to lead by example, because no one gives what he does not have.
Factors associated with hospitalization for aortic stenosis in Portugal from 2015 to 2017

Key issues for implementation of Genomics in Healthcare: a Policy Brief on timely reporting to follow up the project results.
Background/problem: Healthcare (HC) can significantly benefit from genomic information for earlier, accurate diagnosis, effective personalized treatment with fewer adverse events, and accurate profiling of individuals for disease prevention. However, European countries are currently at variable maturity stages regarding the implementation of genomic medicine (GM) in healthcare, hindering the equitable delivery of personalized medicine to citizens across borders.
Description of the problem: The European 1+Million Genomes Initiative (1+MG) aims to provide cross-border access to quality genomic information and related clinical data, to advance data-driven research and HC solutions to benefit citizens. This initiative is encouraging countries to develop national GM strategies, but guidance for successful implementation is needed. In this context, the Beyond 1 Million Genomes project, a supporting action to the 1+MG initiative, organized three Country Exchange Visits (CEV) to discuss critical issues and share experiences and best practices for the implementation of sustainable GM strategies in healthcare.

QUESTIONS ANSWERED: Can support to micro-planning be effective in enhancing vaccination numbers? Can monetary incentives to personnel based on performance enhance vaccination numbers? Can a settings approach (schools, workplaces, ...) enhance vaccination numbers?
RESULTS: The number of vaccinations increased from an average of 5 per team per day in early December (after the refresher training) up to an average of 15 per day after the support to micro-planning, the progressive monetary incentives based on performance and the introduction of the settings approach. Up to 85,000 doses were administered in 4 months.

... Klebsiella pneumoniae and Escherichia coli. The UV-C device consists of a protective dome with a reflective coating, a UV-C lamp (placed in the device base) and three reflective holders. Different positions and exposure times were tested using two different carrier holders for the bacterial inoculum (plastic and stainless steel) to estimate the germicidal efficiency of UV-C lamp exposure with direct and reflected (from the dome coating) light.
Results: The experiment showed that the highest bacterial inactivation effect (3.5 to 7 log10) was achieved for all four strains at 3 minutes, but even at 1 minute there is a marked reduction in the bacterial load, with the only exception of Klebsiella pneumoniae. After 45 and 30 seconds, steel carriers contaminated by Escherichia coli and Staphylococcus aureus on the opposite side of the UV-C source showed significant reductions in the range between 99 and 99.9%.
Conclusions: The device has proven to be effective for the disinfection of various everyday objects placed into the lamp and introduces beauty to the household environment.

Background: Severe aortic stenosis prevalence has been growing worldwide and constitutes a public health challenge. The gold-standard treatment is Surgical Aortic Valve Replacement (SAVR); however, Transcatheter Aortic Valve Implantation (TAVI) has been increasing, especially in high-risk surgical patients. This study aims at identifying the factors associated with the implementation of TAVI to minimize possible disparities in access to health services.
Methods: This study used data on inpatient discharges from the Portuguese NHS, from 2015 to 2017. SAVR and TAVI were classified according to the International Classification of Diseases (ICD). Chi-square tests and independent t-tests with a 1% significance level in SPSS were performed to identify the factors associated with both interventions.
Conclusions: TAVI was performed in more severe patients and there was an increase in TAVI over the years, which is consistent with the growing use of the technology among other patients, e.g., high-risk surgical patients. We also found a geographic pattern in the use of SAVR and TAVI. This might reveal the existence of geographic disparities regarding availability of and access to health services.
Key messages: In Portugal, there is an increase in the performance of TAVI, with geographical concentration that reflects on access. TAVI is more often performed in more severe patients as an alternative to SAVR, with similar discharge outcomes.

Background: Despite policies aiming to curtail men's violence against women (VAW) in Sweden, one in three women has experienced physical/sexual VAW. Promoting anti-VAW masculinities among young men is a key intervention to reduce VAW; yet little is known about what actions could be used to effectively do so in Sweden. This study aims to: 1. identify actions that young people (men and women) and stakeholders believe can be used to promote anti-VAW masculinities, and 2. quantify the relationship, coherence and patterns of importance and applicability between the different identified actions.
Methods: A mixed-methods study was conducted in Stockholm in 2019. In-depth interviews with young people aged 18-24 years (men = 16, women = 12) and stakeholders (n = 12) were used to identify actions to promote anti-VAW masculinities. Then, an online survey with 83 people (77 young people) was conducted, asking participants to sort the actions and rate them in terms of importance and applicability. Multidimensional scaling and hierarchical cluster analysis were used to create cluster maps. Each cluster was rated in terms of importance and applicability.
Results: Six clusters were identified: 1. own self-reflection and change, 2. actions in leisure-cultural spaces, 3. mandatory education on gender-VAW, 4. positive role models in public arenas, 5. support civil society, and 6. strengthen government, police, and legal response. The clusters of mandatory education on gender-VAW and own self-reflection and change were rated higher in importance (mean 5.1 and 4.8 respectively). Mandatory education on gender-VAW and actions in leisure-cultural spaces were rated higher in applicability (mean 4.6 and 4.7 respectively). Correlation between importance and applicability was low (rho = 0.16).
Conclusions: Promoting anti-VAW masculinities to tackle VAW should be done in multiple arenas. Mandatory education on gender-

Abstract citation ckac131.024
Background: Improved efficiency is one overall goal in WHO's Health Systems Framework. Efficiency is an important dimension of health system performance assessment (HSPA). HSPA is used as a tool to monitor and evaluate the performance of health systems and to support evidence-based policymaking. In the pilot study for a first German HSPA, efficiency was assessed as one dimension.
Methods: Indicators were selected based on a systematic search of established instruments in national and international HSPA initiatives. Criteria for the inclusion of indicators were data availability and international comparability. Where possible, indicators were evaluated in terms of their development over time (2000-2020), in comparison to eight European countries (e.g., Austria, Denmark, France), and regarding equity aspects (e.g., age, gender, region).
Results: Eight indicators to assess the efficiency of the German health system were identified and analysed accordingly. They cover the pharmaceutical sector, outpatient and inpatient care, and system-wide efficiency. Trend analyses were possible for all indicators, and most were also suitable for international comparisons. Overall, results of the chosen indicators indicate a moderate health system efficiency. The volume of generics as a share of all pharmaceuticals, e.g., was 83% in Germany in 2019 (country average: 54%) and has been steadily increasing since 2000. In contrast, expenses for pharmaceuticals overall rose from 1.4% of GDP in 2004 to 1.7% in 2019, whereas they declined from 1.3% to 1.1% on average in the other countries.
Conclusions: Within this first pilot study, a systematic and comparative German HSPA measuring the efficiency of the German health system using eight predefined indicators was proven to be

15th European Public Health Conference 2022
Synergistic antitumor interaction between valproic acid, capecitabine and radiotherapy in colorectal cancer: critical role of p53

Background: Recurrence with distant metastases has become the predominant pattern of failure in locally advanced rectal cancer (LARC); thus, the integration of new antineoplastic agents into preoperative fluoropyrimidine-based chemo-radiotherapy represents a clinical challenge in implementing an intensified therapeutic strategy. The present study examined the combination of the histone deacetylase inhibitor (HDACi) valproic acid (VPA) with fluoropyrimidine-based chemo-radiotherapy in colorectal cancer (CRC) cells.

Methods: HCT-116 (p53-wild type), HCT-116 p53−/− (p53-null), SW620 and HT29 (p53-mutant) CRC cell lines were used to assess the antitumor interaction between VPA and the capecitabine metabolite 5′-deoxy-5-fluorouridine (5′-DFUR) in combination with radiotherapy, and to evaluate the role of p53 in the combination treatment. Effects on proliferation, clonogenicity and apoptosis were evaluated, along with γH2AX foci formation as an indicator of DNA damage.

Results: Combined treatment with equipotent doses of VPA and 5′-DFUR resulted in synergistic effects in CRC lines expressing p53 (wild-type or mutant). In HCT-116 p53−/− cells we observed antagonistic effects. Radiotherapy further potentiated the antiproliferative, pro-apoptotic and DNA damage effects induced by the 5′-DFUR/VPA combination in p53-expressing cells.

Conclusions: These results highlight the role of VPA as a valuable candidate to be added to preoperative chemo-radiotherapy in LARC. On this basis, we launched the ongoing phase I/II study of VPA and short-course radiotherapy plus capecitabine as preoperative treatment in low-moderate risk rectal cancer (V-shoRT-R3). Electronic supplementary material: The online version of this article (10.1186/s13046-017-0647-5) contains supplementary material, which is available to authorized users.
Background
Colorectal cancer (CRC) is the third most common cancer in males and females, with an estimated worldwide annual incidence of 1.3 million [1,2], rectal cancer accounting for about 30% of cases [3]. The management of rectal cancer differs somewhat from that of colon cancer because of the increased risk of local recurrence and a poorer overall prognosis. Preoperative fluoropyrimidine-based chemo-radiotherapy followed by surgery is the preferred treatment option for patients with stage II and III rectal disease [4,5]. However, rectal cancer is a heterogeneous group of tumors, for which different types of treatment are available depending on stage and progression. Although the introduction of total mesorectal excision and preoperative radiotherapy (RT) has been revolutionary and has improved local control after curative resection for rectal cancer, local relapses and distant metastases still occur [6]. This is particularly true for the "high risk" locally advanced rectal cancer (LARC) patients, also defined as the "ugly" subgroup [3]. Therefore, several strategies have attempted to improve local control and reduce distant recurrence by adding new cytotoxic agents to the standard treatment strategy, but this is still an ongoing challenge [3]. Histone deacetylase inhibitors (HDACi) are an emerging group of agents that target histone deacetylases, influencing chromatin structure, which in turn regulates gene expression. Radiosensitization by HDACi has been demonstrated in multiple preclinical and clinical studies [7-10]. Moreover, HDACi can also modulate cellular functions independently of gene expression by deacetylating non-histone proteins, thereby participating in the regulation of several pathways altered in cancer, such as apoptosis, cell cycle and DNA repair.
Valproic acid (VPA) is an anti-epileptic drug with HDAC inhibitory activity, characterized by a much better safety profile compared to other HDACi, with neurovestibular symptoms, fatigue and somnolence as the only dose-limiting toxicities [11]. VPA is also considered a less potent HDACi, which probably accounts for its milder toxicity. For these reasons, and given its safe use as chronic therapy in epileptic disorders, VPA is a good candidate for combination therapy development in cancer patients. Good tolerability and encouraging tumor responses of VPA in combination with chemotherapy were observed in phase I/II trials in various solid tumors, including CRC [12-16]. We have previously demonstrated that HDACi, including VPA, synergize with fluoropyrimidines in in vitro and in vivo preclinical models of breast cancer and CRC by downregulating thymidylate synthase (TS), the key enzyme in the mechanism of action of 5-Fluorouracil (5-FU), and by upregulating thymidine phosphorylase (TP), the key enzyme converting capecitabine to 5-FU [17-19]. TS is essential for the de novo synthesis of thymidylate, and hence DNA synthesis, and is a critical target of 5-FU. High levels of TS expression have been correlated with poorer overall patient survival in several tumors and with resistance to 5-FU [20]. Thus, while increasing the conversion of capecitabine to 5-FU through TP modulation, HDACi also down-regulate TS, the final target of 5-FU, enhancing its antitumor activity. Preclinical radiosensitization activity of VPA has also been demonstrated [9,10]. In the present study, we examined for the first time the effect of VPA in combination with fluoropyrimidines and RT on human CRC cell lines.
Since p53 signaling is frequently dysregulated in CRC, and the loss of fully functional p53 is often associated with resistance to current therapies and poor prognosis, we also investigated the role of p53 in the combination setting, taking advantage of four cellular models: the p53-wild type (wt) HCT-116 line and its p53-null subline HCT-116 p53−/−, and the p53-mutant (mut) HT29 and SW620 cell lines.

Materials
VPA was purchased from Enzo Life Sciences (Farmingdale), while 5′-deoxy-5-fluorouridine (5′-DFUR) was from Sigma-Aldrich. Stock solutions were prepared in sterile water and diluted to the appropriate concentrations in culture medium before addition to the cells. All media, serum, antibiotics, and glutamine were from Lonza (Verviers).

Cell culture and cell proliferation assay
HT29 and SW620 cell lines were from the American Type Culture Collection (Rockville, MD, USA), while HCT-116 and HCT-116 p53−/− were kindly provided by Prof. G. Russo (University Federico II, Naples, Italy). All cell lines were maintained in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% heat-inactivated foetal bovine serum, 50 units/mL penicillin, 500 μg/mL streptomycin, and 4 mmol/L glutamine. All cell lines were cultivated at 37°C in a humidified 5% CO2 atmosphere and regularly confirmed to be free of mycoplasma with the MycoAlert Mycoplasma Detection Kit (Lonza). Cells were authenticated by a short tandem repeat profile generated by LGC Standards (Middlesex). Cell survival/proliferation was measured by a spectrophotometric dye incorporation assay using sulforhodamine B (SRB, ICN Biomedicals), in quadruplicate in 96-well plates, 96 h after treatment, as described before [17]. All in vitro studies in cancer cells were performed with the capecitabine metabolite 5′-DFUR, which requires the presence of TP to be converted into the active drug 5-FU.
Capecitabine, being a prodrug, requires a first conversion step by carboxylesterase, an enzyme expressed at low levels in most cancer cell lines, as previously described [18].

In vitro drug combination studies
Drug interaction was evaluated by the Chou-Talalay method, based on concentration-effect curves generated as a plot of the fraction of unaffected (surviving) cells versus drug concentration [21,22]. Serial dilutions of equipotent doses of the two agents in combination (VPA and 5′-DFUR) were tested. Synergism, additivity, or antagonism was quantified by evaluating the combination index (CI), calculated by the Chou-Talalay equation with CalcuSyn software (Biosoft) as described elsewhere [17,23-26]. A CI < 0.9, CI = 0.9-1.2, and CI > 1.2 indicated a synergistic, additive or antagonistic effect, respectively [17,25]. The dose reduction index (DRI) determines the magnitude of dose reduction allowed for each drug when given in synergistic combination, as compared with the concentration of the single agent needed to achieve the same effect level [21].

Clonogenic assay
SW620, HT29, HCT-116 and HCT-116 p53−/− cells were used for the colony forming assay. Briefly, around 80-100 cells were seeded in 6-well flat-bottom plates and treated for 24 h with VPA 1 mM and/or 5′-DFUR at a concentration corresponding to the IC15 at 96 h. The following day, cells were placed in a water-equivalent phantom at a depth of 5 cm and exposed or not to a single 2 Gy irradiation with 6 MV photons from an Elekta Agility linear accelerator. Colonies were allowed to grow for 12-14 days after RT, then collected, washed with PBS 1× and stained with 0.5% crystal violet in 25% methanol in water for 30 min. Colonies were photographed and the colony area was evaluated using Image-Pro Plus (Immagini and Computer snc). Experiments were performed in triplicate and repeated at least 3 times.
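The CI and DRI quantities above follow directly from the median-effect equation, fa/(1 − fa) = (D/Dm)^m, where Dm is the median-effect dose and m the sigmoidicity coefficient of each agent. As an illustrative sketch only (the study itself used CalcuSyn; all parameter values below are hypothetical), a combination point can be scored like this:

```python
def dose_for_effect(dm, m, fa):
    """Dose producing fraction affected fa, from the median-effect
    equation fa/(1 - fa) = (D/Dm)^m (Chou-Talalay)."""
    return dm * (fa / (1.0 - fa)) ** (1.0 / m)

def combination_index(d1, d2, dm1, m1, dm2, m2, fa):
    """CI = d1/Dx1 + d2/Dx2 at effect level fa, where Dx_i is the dose
    of drug i alone producing that effect; also returns the DRI pair."""
    dx1 = dose_for_effect(dm1, m1, fa)
    dx2 = dose_for_effect(dm2, m2, fa)
    ci = d1 / dx1 + d2 / dx2
    dri = (dx1 / d1, dx2 / d2)  # dose-reduction index for each drug
    return ci, dri

def interpret(ci):
    # Thresholds used in this study: <0.9 synergy, 0.9-1.2 additive, >1.2 antagonism.
    if ci < 0.9:
        return "synergistic"
    if ci <= 1.2:
        return "additive"
    return "antagonistic"
```

At fa = 0.5 the median-effect equation gives Dx = Dm for each drug, so a hypothetical combination using 0.3·Dm1 of one agent and 0.4·Dm2 of the other yields CI = 0.7, i.e. synergism by the thresholds above.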
Immunofluorescent staining for γH2AX foci
HT29, SW620, HCT-116 and HCT-116 p53−/− cells were seeded at 30,000 cells/well on round coverslips placed in 24-well flat-bottom plates. Cells were treated or not with VPA and/or 5′-DFUR at a concentration corresponding to the IC30 or IC50 at 96 h, for 24 h, and then exposed or not to a single dose of irradiation (2 Gy) as described above. Cells were then collected 24 h after RT, washed with PBS 1×, pre-fixed with 4% formaldehyde in PBS 1× for 10 min at room temperature, washed with PBS 1×, and permeabilized and fixed with 100% methanol at −20°C for 10 min. Cells were then stained with a γH2AX antibody (green). After secondary antibody incubation, slides were mounted with DAPI (blue) mountant (ProLong® Gold Antifade Mountant with DAPI, Life Technologies) applied directly to the fluorescently labeled cells on microscope slides. Slides were then analyzed using a fluorescence microscope (Axioscope.A1, Zeiss). Representative images show γH2AX-positive nuclear foci at 63× magnification.

Flow cytometry analysis of apoptosis
HT29 and SW620 cells were treated with VPA and/or 5′-DFUR, at the concentrations indicated for the clonogenic assay, for 24 h and then exposed or not to 2 Gy RT. Apoptosis was measured 24 and 48 h after RT using annexin V-fluorescein isothiocyanate (annexin V-FITC) staining. Briefly, adherent cells were harvested, washed with PBS 1× and stained with annexin V-FITC. Annexin-positive cells were quantified with a FACSCalibur flow cytometer (Becton Dickinson), considering fluorescence collected as FL1 (log scale), and analysed using CellQuest Pro software (Becton Dickinson). Data were acquired after analysis of at least 10,000 events.

Flow cytometry analysis of cell cycle
Analysis of cell cycle kinetics was performed at the indicated times on HT29 and SW620 cells treated with the VPA/5′-DFUR/RT combination. Briefly, adherent and floating cells were harvested, fixed in 70% ethanol and stored at −20°C until analysis.
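Conceptually, the annexin V quantification above amounts to placing an FL1 gate from a negative control and counting the fraction of acquired events beyond it. A minimal sketch of that arithmetic (the gating rule and all values are hypothetical; the actual analysis was done in CellQuest Pro):

```python
import statistics

def gate_from_control(control_fl1, n_sd=3.0):
    """Place the positivity gate n_sd sample standard deviations above
    the mean FL1 (log-scale annexin V-FITC) signal of an untreated
    control sample. The 3-SD rule here is an illustrative assumption."""
    return statistics.fmean(control_fl1) + n_sd * statistics.stdev(control_fl1)

def annexin_positive_fraction(fl1_values, threshold):
    """Fraction of acquired events whose FL1 signal exceeds the gate."""
    if not fl1_values:
        raise ValueError("no events acquired")
    return sum(1 for v in fl1_values if v > threshold) / len(fl1_values)
```

With a real acquisition the input lists would hold ≥10,000 events per sample, matching the acquisition criterion stated above.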
After nuclear DNA staining with propidium iodide, flow cytometry was performed on a FACSCalibur flow cytometer (Becton Dickinson). For each sample, 20,000 events were collected. Cell cycle analysis was performed with ModFit LT software (Verity Software House, Inc., Topsham, ME). FL2 area versus FL2 width gating was done to exclude doublets from the G2-M region.

Statistics
The results of in vitro cell proliferation assays are expressed as the means of at least three independent experiments done in quadruplicate, and the standard deviation (SD) is indicated. Representative results from western blotting, immunofluorescent staining for γH2AX foci, and apoptosis and cell cycle analysis by flow cytometry (performed in triplicate) from a single experiment are presented; additional experiments yielded similar results. Appropriate statistical analyses were applied, assuming a normal sample distribution. Statistical significance in the clonogenic assay was determined by the unpaired t-test. All statistical evaluations were done using GraphPad Prism 6 (GraphPad Software, Inc.).

In vitro synergistic antitumor effects of VPA in combination with 5′-DFUR in CRC cells: role of TP, TS and p53
We first evaluated the antiproliferative effect of either VPA or the capecitabine metabolite 5′-DFUR as single agents on the HT29, SW620, HCT-116 and HCT-116 p53−/− cell lines. All examined CRC cell lines were equally sensitive to VPA treatment, independently of their intrinsic characteristics such as p53, KRAS, BRAF and PI3KCA status (Table 1), the basal expression of TS and TP proteins, or basal histone-H3 acetylation (AcH3) (Fig. 1a). As shown in Fig. 1a, HT29 and SW620 cells expressed lower levels of TP protein compared to both HCT-116 cell lines. Moreover, being p53-mut, HT29 and SW620 cells expressed higher p53 protein levels compared to HCT-116 cells (Fig. 1a). We confirmed that HCT-116 p53−/− cells did not express significant levels of p53 protein (Fig. 1b).
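The working concentrations used throughout (IC15, IC30, IC50 at 96 h) are read off SRB dose-response curves. As an illustrative sketch only (simple log-linear interpolation on hypothetical data, not necessarily the fitting procedure actually used), an IC level can be estimated as:

```python
import math

def ic_estimate(doses, viability, level=0.5):
    """Estimate the dose giving fractional viability `level` (0.5 for
    IC50, 0.7 for IC30, 0.85 for IC15) by linear interpolation on
    log-dose between the two bracketing data points.

    `doses` must be ascending; `viability` is the fraction of control."""
    points = list(zip(doses, viability))
    for (d0, v0), (d1, v1) in zip(points, points[1:]):
        if (v0 - level) * (v1 - level) <= 0:  # target bracketed here
            t = (level - v0) / (v1 - v0)
            return math.exp(math.log(d0) + t * (math.log(d1) - math.log(d0)))
    raise ValueError("target viability level not bracketed by the data")
```

For example, a hypothetical curve dropping from 90% to 50% viability between 0.1 and 1.0 mM places the IC50 at 1.0 mM.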
Interestingly, in the two p53-mut lines HT29 and SW620, the lower expression of TP, the critical enzyme converting 5′-DFUR into the active compound 5-FU, correlates with lower sensitivity to 5′-DFUR (Table 1 and Fig. 1a), consistent with our previous studies [18,19]. We next investigated the antitumor effect of VPA in combination with 5′-DFUR. Combined treatment with equipotent doses (50:50 cytotoxic ratio) of VPA and 5′-DFUR for 96 h resulted in a synergistic antiproliferative effect in the HT29, SW620 and HCT-116 cell lines, as shown by CI values always lower than 0.9, calculated at 50% (CI50), 75% (CI75) or 90% (CI90) cell lethality (Table 2). In addition, we demonstrated a reduction in the IC50 values (DRI50) for both VPA and 5′-DFUR in the combination setting compared with the two drugs used alone (Table 2). Interestingly, in HCT-116 p53−/− cells we did not observe a synergistic interaction between VPA and 5′-DFUR, as shown by CI values higher than 1.2 and a lower DRI for 5′-DFUR compared with the HCT-116 p53-wt cells (Table 2). Similar data were obtained in p53-wt, -mut and -null prostate cancer and non-small-cell lung cancer models, where we demonstrated a synergistic interaction between VPA and 5′-DFUR in p53-wt or p53-mut cell lines, but not in p53-null cells, where we observed only an additive/antagonistic effect (unpublished results). To gain insight into the mechanism of the observed synergism, we tested the effect of increasing doses of VPA on TS, TP and p53 protein expression in HCT-116 p53-wt and HCT-116 p53−/− cells and in p53-mut HT29 and SW620 cells. VPA, even at low doses (0.5 and 1 mM), was able to up-regulate TP and down-regulate TS protein expression within 24 h of treatment in all cell lines examined, in a dose-dependent manner, independently of p53 expression and status, as previously reported with alternative HDACi and/or other cell models [17] (Fig. 1c).
Moreover, we observed that VPA up-regulates p53-wt and down-regulates p53-mut protein levels, in accord with previous data obtained by our group and others using alternative HDACi [17,27]. Induction of AcH3 confirmed the dose-dependent HDAC-inhibitory activity of VPA in all treated cells (Fig. 1c).

VPA/5′-DFUR combination sensitizes CRC cells to RT: role of p53
To evaluate whether the VPA/5′-DFUR combination can sensitize CRC cells to RT, cells were first treated for 24 h with VPA and/or 5′-DFUR and then exposed or not to 2 Gy RT. Notably, we performed most of the subsequent experiments with a low dose of VPA (1 mM), easily reached in the plasma of patients treated at antiepileptic dosages [28] and also able to modulate TS and TP expression (Fig. 1c). We used the colony formation assay and initially evaluated VPA/5′-DFUR plus RT on HCT-116 and HCT-116 p53−/− cells. As shown in Fig. 2a, VPA/5′-DFUR treatment strongly reduced colony formation in HCT-116 cells compared to control or single agent treatments. However, this effect was not observed in the syngeneic HCT-116 p53−/− cell line, confirming the data reported above by antiproliferative assay and CI evaluation. Furthermore, although a synergistic inhibitory effect was observed by combining either VPA or 5′-DFUR with RT, the VPA/5′-DFUR plus RT triple combination almost completely inhibited colony formation in HCT-116 cells. HCT-116 p53−/− cells appeared more sensitive to either RT or 5′-DFUR alone compared to parental cells, and the antitumor effect of RT was further increased in combination with 5′-DFUR. However, in this cell line we did not observe any synergistic effect of RT in combination with VPA, alone or in triple combination, since 5′-DFUR single agent treatment, with or without RT, was comparable to the VPA/5′-DFUR combined treatments (Fig. 2a). We also evaluated the treatment effects on DNA damage by measuring γH2AX foci formation 24 h after RT. As shown in Fig.
2b, in HCT-116 cells VPA/5′-DFUR treatment increased the number of γH2AX foci compared to single agent treatments, and this effect was clearly amplified by RT (Fig. 2b and Additional file 1: Figure S1A). In HCT-116 p53−/− cells, although the treatments increased γH2AX foci formation compared to untreated cells, VPA and 5′-DFUR in combination did not demonstrate any synergistic effect, either alone or with RT, compared to single agent treatments (Fig. 2a and Additional file 1: Figure S1B). We next tested the VPA/5′-DFUR and RT interaction in the HT29 and SW620 p53-mut cell lines. As shown by colony formation assay, the VPA/5′-DFUR combination did not significantly increase the antitumor effect compared to single agent treatments in either cell line. Similarly, the addition of RT to single agent treatments did not improve the antitumor effect in p53-mut cells. Conversely, a significant inhibition of colony formation was observed only in the triple combination setting in both HT29 and SW620 cells (Fig. 3a). Furthermore, although we did not observe any synergistic effect of the VPA/5′-DFUR combination by colony formation assay, we showed an increased number of γH2AX foci after VPA/5′-DFUR treatment compared to control or single drugs in both HT29 and SW620 cell lines (Fig. 3b). The addition of VPA/5′-DFUR to RT was able to significantly increase γH2AX foci formation in the two cell lines (Fig. 3b and Additional file 2: Figure S2A and B).

Fig. 2 Crucial role of p53 in the synergistic effect of VPA/5′-DFUR combination treatment with RT. (a) Clonogenic assay shows the long-term effects of the VPA/5′-DFUR plus 2 Gy RT combination treatment on the HCT-116 and HCT-116 p53−/− CRC cell lines, collected 12-14 days after RT. A photograph of one well in a representative experiment is shown for each treatment; bar graphs show the area of colonies with diameter > 250 μm (mean ± SD of 2 or more separate experiments, each with technical triplicates). * = p < 0.05; ** = p < 0.006; *** = p < 0.0005.
(b) DNA damage was analyzed in the HCT-116 and HCT-116 p53−/− cell lines by visualizing the DSB marker γH2AX foci. Cells treated for 24 h with or without VPA and/or 5′-DFUR at the indicated concentration, corresponding to the IC30 at 96 h, and then with or without 2 Gy RT, were collected 24 h after RT. Cells were fixed, stained for γH2AX (green) and DAPI for nuclei (blue) and observed by microscope. Representative images show γH2AX-positive nuclear foci at 63× magnification.

Apoptotic bodies were observed in both HT29 and SW620 cells after 5′-DFUR or VPA/5′-DFUR combination treatment, as shown by phase-contrast microscopy, and this effect was further potentiated when RT was added, particularly in the VPA/5′-DFUR/RT triple combination (Additional file 3: Figure S3). Similarly, in SW620 cells we observed an induction of apoptosis upon VPA/5′-DFUR combination treatment compared to single agent treatments, further potentiated by 24 h exposure to RT, as demonstrated by flow cytometry analysis with annexin V-FITC staining. In RT-resistant HT29 cells, no major induction of apoptosis was observed after 48 h of RT treatment compared to cells not exposed to RT, either alone or in combination (Fig. 4a).

Fig. 3 Synergistic antiproliferative effect induced by the VPA/5′-DFUR combination plus 2 Gy RT in CRC cell lines. HT29 and SW620 cells were treated or untreated with VPA 1 mM and 5′-DFUR at the indicated concentrations, corresponding to the IC30 at 96 h, for 24 h and then with or without 2 Gy RT. (a) Clonogenic assay shows the long-term effects of the VPA/5′-DFUR plus 2 Gy RT combination treatment on the HT29 and SW620 CRC cell lines, collected 12-14 days after RT. A photograph of one well in a representative experiment is shown for each treatment; bar graphs show the area of colonies with diameter > 250 μm (mean ± SD of 2 or more separate experiments, each with technical triplicates). * = p < 0.05; ** = p < 0.006; *** = p < 0.0005. (b) DNA damage was analyzed in HT29 and SW620 by visualizing the DSB marker γH2AX foci. Cells treated for 24 h with or without VPA and/or 5′-DFUR at the indicated concentrations, corresponding to the IC30 at 96 h, and then with or without 2 Gy RT, were collected 24 h after RT. Cells were fixed, stained for γH2AX (green) and DAPI for nuclei (blue) and observed by microscope. Representative images show γH2AX-positive nuclear foci at 63× magnification.

Thereafter, we analyzed the effects of our treatments on cell cycle modulation (Fig. 4b). It has been reported [29] that RT induces a brief and moderate G2/M arrest, peaking at 6 h and returning to basal levels 10 to 12 h after 2 Gy RT. We observed a slight increase of G2/M arrest in SW620 cells after 24 h of treatment (15.6% of cells in G2/M phase in control cells vs 23.2% in 2 Gy RT irradiated cells), but not in RT-resistant HT29 cells (11.41% vs 12.3%). Moreover, we confirmed in SW620 cells the ability of HDACi to abrogate the G2 arrest induced by RT, as previously reported [29] (Fig. 4b). Additionally, we observed a strong S-phase arrest after 5′-DFUR treatment in both HT29 and SW620 cells, and this was further increased by VPA in HT29 cells. In both cell lines RT reduced the S-phase arrest compared to 5′-DFUR-treated cells (Fig. 4b). In order to better define the mechanism of the VPA/5′-DFUR and RT interaction in p53-mut cells, we next evaluated by western blotting the expression of critical proteins potentially involved in the observed antitumor effects in p53-mut HT29 and SW620 cells. Cells were treated for 24 h with VPA and/or 5′-DFUR, then exposed or not to 2 Gy RT and harvested after 48 h. First of all, we confirmed that VPA, alone or in combination with 5′-DFUR, in the presence or absence of RT, is able to increase TP protein levels also at a later time point (after 72 h). After 48 h, RT alone slightly increased TP in both cell lines, in accord with previous reports [30,31]. This effect is further potentiated by VPA.
Notably, as confirmed by densitometric analysis, TP upregulation is conserved (HT29 cells) or even potentiated (SW620 cells) in the triple combination setting (Fig. 5). Moreover, we confirmed that VPA reduces both basal and 5′-DFUR-induced TS protein expression, as previously reported [17-19], in the presence or absence of RT. Notably, the formation of the ternary complex between the 5-FU metabolite FdUMP, the enzyme TS and 5,10-methylene tetrahydrofolate [17], highlighted by the upper bands in the western blot, is still achieved in the presence of VPA, indicating that TS downregulation does not affect the biochemical inhibition of the enzyme induced by 5-FU. We also confirmed that VPA/5′-DFUR combination treatment was able to induce DNA damage, as demonstrated by increased γH2AX protein expression. Significantly, the triple combination setting further increased γH2AX foci formation, in agreement with previous results (Fig. 5).

Fig. 4 Pro-apoptotic effect induced by the VPA/5′-DFUR combination plus 2 Gy RT in CRC cell lines. HT29 and SW620 cells were treated or untreated with VPA 1 mM and 5′-DFUR at the indicated concentration, corresponding to the IC30 at 96 h, for 24 h, followed or not by 2 Gy RT. (a) The apoptotic effect on HT29 and SW620 cells was evaluated by flow cytometry analysis upon annexin V-FITC staining after treatment with or without VPA 1 mM and/or 5′-DFUR at the indicated concentration, corresponding to the IC15 at 96 h, for 24 h and then with or without 2 Gy RT for 48 (HT29) or 24 (SW620) hours. (b) Cell cycle analysis was performed in the HT29 and SW620 cell lines. The percentages of sub-G1, G1, S and G2/M populations were analyzed by flow cytometry in cells treated for 24 h with or without VPA 1 mM and 5′-DFUR at the indicated concentrations, corresponding to the IC15 at 96 h, followed or not by 24 h exposure to 2 Gy RT.

Notably, we also demonstrated prolonged DNA damage up to 48 h after RT in the combination setting.
In agreement with these latter findings, upon VPA/5′-DFUR treatment with RT we observed, in both HT29 and SW620 cell lines, a prolonged induction of the phosphorylation of ATM, the kinase mainly recognizing the DNA double-strand breaks (DSB) produced by ionizing radiation, compared with RT alone. In both cell lines ATM phosphorylation was strongly induced also by the VPA/5′-DFUR combination, and in SW620 cells also by 5′-DFUR treatment alone (Fig. 5). The observed increase in ATM phosphorylation/activity correlates with an increase in ATM protein expression. Indeed, we also observed an induction of p53 phosphorylation (serine 37) in both cell lines upon combination treatment, probably mediated by ATM induction. In detail, this is evident in the absence of RT, while the induction of p53 phosphorylation in the triple combination is similar to that observed in 5′-DFUR/RT-treated cells. Furthermore, in both HT29 and SW620 cells, the p53 activation induced upon VPA/5′-DFUR/RT combination treatment is accompanied by induction of the pro-apoptotic protein BAX, as compared to control or single agent treatments, an observation that can explain, at least in part, the apoptotic induction (Fig. 4a). Finally, we evaluated the expression of voltage-dependent anion-selective channel protein 1 (VDAC-1), a protein involved in reactive oxygen species (ROS) generation and a key player in mitochondria-mediated apoptosis, which we have previously reported to be regulated by HDACi [32]. As shown in Fig. 5, in HT29 cells we observed an increased expression of VDAC-1 after VPA/5′-DFUR treatment compared to single agents alone, further enhanced by RT. In SW620 cells, in the absence of RT, we observed a clear increase in VDAC-1 expression after 5′-DFUR treatment alone. This effect was maintained in the presence of RT, including in the triple combination treatment.

Discussion
Fluoropyrimidine-based chemo-radiotherapy is a standard preoperative approach in LARC patients.
HDACi have shown promising anticancer effects as radiosensitizers in both preclinical and clinical settings when administered in combination with RT [7,8,33-35]. In this study we report that the HDACi VPA in combination with capecitabine could be a suitable approach to use in combination with RT in CRC treatment, in both p53-wt and p53-mut tumors. We and others have previously demonstrated that HDACi, including VPA, synergize with either 5-FU or capecitabine because they are able to modulate the levels of TP and TS, two critical enzymes in the metabolism of fluoropyrimidines [17,18,36-39]. Notably, TP knockdown experiments confirmed a crucial role of TP protein up-regulation in the observed synergism [19]. In the present study we showed for the first time that the VPA/capecitabine combination treatment further synergizes with RT, as previously reported with the pan-HDACi vorinostat [35]. Moreover, we also confirmed modulation of both TS and TP protein levels by VPA in CRC models, even in the presence of RT. Interestingly, TP protein induction is achieved also at low doses of VPA (0.5-1 mM), corresponding to a plasma level between 50 and 100 μg/ml, easily reached in patients at normal anticonvulsant doses [28]. Although at these doses VPA did not induce growth inhibition as a single agent, a significant synergistic antitumor effect was still demonstrated in combination with 5′-DFUR and RT, suggesting a specific mechanism of interaction as well as the feasibility of translating this approach into a clinical study. Furthermore, although our data suggest that VPA may increase sensitivity to fluoropyrimidines by specifically modulating both TS and TP expression, we also showed that p53 has a critical role in the observed synergism.
Indeed, although we demonstrated that in p53-null HCT-116 p53−/− cells VPA still modulates both TS and TP, no synergistic antitumor effect was observed in combination with 5′-DFUR and/or RT in this cell line, in contrast with p53-wt or p53-mut cells. Notably, we confirmed similar results in other cancer models. Some HDACi, such as VPA and vorinostat, are able to induce apoptosis independently of p53 status, while for others, such as entinostat, p53 is crucial for activity [40,41]. However, previous reports have demonstrated that HDACi radiosensitization is influenced by p53, through p53 acetylation-mediated c-myc down-regulation [27]. We and others have shown that HDACi might restore/induce p53-wt expression by modulating the epigenetic suppression of the gene or by inhibiting protein degradation [17,41,42], in this way being able to potentiate the effect of anticancer drugs by promoting apoptosis. HDACi are also able to down-regulate mutated p53, by a transcriptional mechanism or by accelerating degradation of the mutant protein [17,43,44], in this way abrogating gain-of-function oncogenic properties, including mechanisms of resistance to anticancer drugs. On the contrary, we speculate that when p53 is deleted, cancer cells rely on different mechanisms to acquire resistance to anticancer drugs and, thus, the synergistic p53-dependent effect exerted by HDACi in combination with anticancer drugs is lost. HDACs have recently been found to participate in the DNA damage response, and their down-regulation has been associated with impaired DNA repair [45]. Considering that after 24 h the RT-induced formation of γH2AX foci, an indicator of DSB, should have recovered to control levels, our data demonstrating prolonged DNA damage up to 48 h after RT in the combination setting suggest that, mechanistically, VPA was able to prolong and further increase the DNA damage induced by 5′-DFUR and/or RT.
This effect, most likely due to a decreased rate of DSB repair, results in apoptosis and in the potentiation of the antitumor effect, specifically in the triple combination. As expected, this effect is particularly evident in p53-wt cell lines, but it was abolished in p53-null cells. Remarkably, the synergistic prolongation of DNA damage by the triple VPA/5′-DFUR/RT combination was also demonstrated in p53-mut cell lines. Accordingly, we demonstrated in p53-mut HT29 and SW620 cells that VPA in combination with 5′-DFUR and RT increased p53 phosphorylation at serine 37, a phosphorylation site identified in cells following DNA damage after exposure to radiation [46]. It has been reported that in both HT29 and SW620, despite being mutated, p53 can be activated by phosphorylation and can modulate cell growth or death [47][48][49]. In response to DSBs and DNA damage, ATM recognizes the lesion and activates Chk2, which subsequently activates p53 by phosphorylation [50]. In detail, in the p53-wt context, stabilization and activation of p53 induce long-term cell-cycle arrest, apoptosis, or senescence by transcriptionally regulating, among others, the CDK inhibitor p21 and the pro-apoptotic protein BAX [51]. These effects do not apply in the presence of p53-mut, where p21 is not expressed and apoptotic events follow different pathways [52]. Generally, ATM phosphorylation is an early event in DNA damage [25]. Hehlgans et al. observed an induction of pATM upon 6 Gy treatment alone or in combination with a novel HDACi (NDACI054) after 0.25 h, with a peak after 1 h; ATM phosphorylation returned to basal levels 24 h after RT treatment [34]. In contrast, we observed an increase in ATM phosphorylation up to 48 h post RT treatment. This prolonged effect was probably due to an increase in ATM protein expression in both HT29 and SW620 cell lines. 
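The γH2AX foci readout underlying these DNA-damage observations amounts to counting discrete bright spots per nucleus in the stained images. A minimal sketch of such a count, using simple thresholding and connected-component labeling on a toy synthetic image (a real pipeline would first segment nuclei and calibrate the threshold):

```python
import numpy as np
from scipy import ndimage

def count_foci(intensity, threshold):
    """Count connected above-threshold regions (foci) in a 2-D image."""
    mask = intensity > threshold
    _, n_foci = ndimage.label(mask)   # 4-connected components by default
    return n_foci

# Tiny synthetic "nucleus" with two separated bright foci (toy data)
img = np.zeros((8, 8))
img[1:3, 1:3] = 5.0
img[5:7, 5:7] = 5.0
n = count_foci(img, threshold=1.0)
```

Comparing such per-nucleus counts at 24 h and 48 h is one simple way to express the persistence of damage described above.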
Thus, we hypothesize that tumor cells induce ATM protein expression in order to maintain its activity, as a consequence of the prolonged damage induced in the combination setting and the inability to repair it. The ATM protein is a crucial player in the induction of cell cycle arrest following DSB generation, through the cell cycle checkpoints (G1, intra-S, and G2/M). This phenomenon leads to efficient repair of DSBs or to cell death [53,54]. It has been reported that the abrogation of radiation-induced G2-M arrest by HDACi may decrease the time available for repair of DNA damage or may interfere with repair mechanisms [55], driving cells into apoptosis [56]. We observed only minor effects on cell cycle regulation in HT29 and SW620 cells, probably due to the presence of a mutated form of p53 in these cells or to the timing chosen for the analysis. Indeed, Kim et al. [29] showed that the moderate G2/M arrest induced by RT was brief, with a peak at 6 h, returning to basal levels 10-12 h after 2 Gy RT. We observed only a slight increase of cells in G2/M in SW620 cells after 24 h of RT treatment, but not in RT-resistant HT29 cells. It is important to underline that the strong S-phase arrest after 5′-DFUR treatment in both HT29 and SW620 was reduced in the triple combination treatment, suggesting that this mechanism could contribute to impairing DNA repair and preventing cell cycle progression. Furthermore, we observed an induction of apoptosis upon VPA/5′-DFUR combination treatment compared to single-agent treatments in both p53-mut HT29 and SW620 cell lines. This effect was further potentiated by 24 h exposure to RT in SW620 cells, but not in RT-resistant HT29 cells. We have previously demonstrated that the pro-apoptotic effect induced by the synergistic combination of the HDACi vorinostat with EGFR inhibitors was mediated by altered mitochondrial homeostasis, resulting in ROS accumulation [32]. 
We also found that vorinostat induced the expression of VDAC-1, the major mitochondrial porin of the outer mitochondrial membrane, which is involved in ROS generation and is a key player in mitochondria-mediated apoptosis. In its open state, VDAC-1 induces apoptosis by mediating the translocation of the pro-apoptotic protein BAX, the release of cytochrome c, and the activation of caspases [32,57]. Thus, VDAC-1 regulation could be functionally involved in oxidative-stress-dependent apoptosis. In the current study, our observations also suggest a possible role of VDAC-1 in the pro-apoptotic effect of the VPA/5′-DFUR/RT combination in p53-mut cells. The concomitant increase in BAX protein could be the mechanism through which the triple combination treatment increases apoptosis. Indeed, it has been reported that BAX may physically interact with VDAC-1 to yield a heterocomplex with increased permeability compared to the VDAC-1 oligomer alone; this increases both ROS and cytochrome c release into the cytoplasm and activates apoptosis [58,59]. Taken together, our data suggest that the synergistic interaction between VPA, 5′-DFUR, and RT results in a convergent mechanism that impairs regulation of the DNA repair pathway, targeting ATM and its downstream partners, together with an alteration in ROS accumulation that leads to DNA damage and apoptosis. Remarkably, our results show that this combination could be used even in the more complicated, poor-prognosis subset of p53-mutated patients, casting new light on this approach.

Conclusion

In conclusion, our findings show that the addition of VPA/capecitabine to RT is a feasible and promising strategy to improve the efficacy of preoperative treatment of LARC. 
On this basis we launched a phase I/II clinical study (V-ShoRT-R3 trial) [60] to explore whether the addition of both VPA and capecitabine to short-course RT before optimal radical surgery might increase the pathologic complete tumor regression rate in low-moderate risk rectal cancer patients (ClinicalTrials.gov number NCT01898104). Correlative studies, comparing normal mucosa with tumor and using blood samples, could identify predictive biomarkers and add new insight into the mechanism of interaction between VPA, capecitabine, and RT. In this regard, the impact of the p53-null type will be explored, because it may give a clue to a subset of patients that might not respond to the combination regimen.

Additional files

Additional file 1: Figure S1. DNA damage was analyzed in HCT-116 (A) and HCT-116 p53 −/− (B) by visualizing the DSB marker γH2AX foci. Cells were treated with or without VPA and/or 5′-DFUR for 24 h at the indicated concentrations: 1 and 1.5 mM VPA, corresponding to the IC30 at 96 h for HCT-116 and HCT-116 p53 −/−, respectively; 1 μM 5′-DFUR, corresponding to the IC30 for both cell lines; and 2 and 5 μM 5′-DFUR, corresponding to the IC50 at 96 h for HCT-116 and HCT-116 p53 −/−, respectively. Cells were then exposed or not to 2 Gy RT, collected 24 h after RT, fixed, stained for γH2AX (green) and with DAPI for nuclei (blue), and observed by microscopy. Triplicate images of a representative experiment show γH2AX-positive nuclear foci at 63× magnification. (PPT 3021 kb)

Additional file 2: Figure S2. DNA damage was analyzed in HT29 and SW620 by visualizing the double-strand break marker γH2AX foci. Cells were treated for 24 h with or without VPA and/or 5′-DFUR at the indicated concentrations, corresponding to the IC30 for VPA and the IC30 and IC50 for 5′-DFUR at 96 h. Cells were then exposed or not to 2 Gy RT, collected 24 h after RT, fixed, stained for γH2AX (green) and with DAPI for nuclei (blue), and observed by microscopy. 
Triplicate images of a representative experiment show γH2AX-positive nuclear foci at 63× magnification. (PPT 2793 kb)

Additional file 3: Figure S3. HT29 and SW620 cells were treated or untreated with VPA 1 mM and 5′-DFUR at the indicated concentration, corresponding to IC30
Whole-genome resequencing provides insights into the evolution and divergence of the native domestic yaks of the Qinghai–Tibet Plateau Background On the Qinghai–Tibet Plateau, known as the roof ridge of the world, the yak is a precious cattle species that has been indispensable to the human beings living in this high-altitude area. However, the origin of domestication, the dispersal route, and the divergence of domestic yaks from different areas are poorly understood. Results Here, we resequenced the genomes of 91 domestic yak individuals from 31 populations and 1 wild yak throughout China. Using a population genomics approach, we observed considerable genetic variation. Phylogenetic analysis suggested that the earliest domestication of yaks occurred in the south-eastern QTP, followed by dispersal to the west QTP and northeast to SiChuang, Gansu, and Qinghai by two routes. Interestingly, we also found potential associations between the distribution of some breeds and historical trade routes such as the Silk Road and the Tang-Tibet Ancient Road. Selective analysis identified 11 genes showing differentiation between domesticated and wild yaks, and the potentially positively selected genes in each group were identified and compared among domesticated groups. We also detected an unbalanced pattern of introgression among domestic yak, wild yak, and Tibetan cattle. Conclusions Our research revealed population genetic evidence for three groups of domestic yaks. In addition to providing genomic evidence for the domestication history of yaks, we identified potentially selected genes and introgression, which provide a theoretical basis and resources for the selective breeding of superior characters and high-quality yak. civilization with the most indispensable assistance of their domesticated yaks. 
Unlike other large herbivorous livestock (average weight greater than 40 kg) such as Tibetan sheep, which spread to the QTP after domestication [4,5], the yak is an endemic species, and the domestication of the wild yak occurred on the QTP [6]. Thus, the origin of yak domestication and the dispersal route of domestic yaks are an important strand of evidence for the history of human migration, exploitation, and development on the QTP. In addition, the detection of genomic differences among domestic yaks may help elucidate the underlying mechanisms of adaptation and facilitate selective breeding. Previous studies have investigated yaks at the archaeological [7][8][9], mitochondrial [10][11][12], whole-genome landscape [13], and population resequencing [14] levels. Whole-genome sequencing and comparative genomics analysis in yak identified the expansion of gene families related to sensory perception and energy metabolism, and some positively selected genes related to hypoxia and nutrition metabolism. Population genetic analysis identified 209 genes related to behavior and tameness, and suggested that the domestication of yaks occurred on the QTP ~ 7300 years BP, followed by a six-fold increase in yak population size by 3600 years BP. Most previous studies focused on the differentiation between domestic and wild yaks, but the location of the earliest domestication, the dispersal direction, and the differences among domestic yaks from different areas have not previously been studied at the genomic level. Compared with other livestock, domestic yaks have a lower degree of domestication. Domestic yaks have a wide range of interactions with wild yaks, and genetic exchange occurs and is hard to avoid. In addition, genetic exchange among domestic yaks from different areas occurs frequently because of human activities, and the phenotypic and character differences among domestic yaks are not conspicuous. 
To examine intraspecific genetic diversification and the geographical distribution of genetic lineages, we performed whole-genome resequencing of 91 domestic yaks distributed throughout the QTP, from XiZang, QingHai, SiChuang, YunNan, GanSu, and Xin-Jiang. Analyses of population genetics, selection, and demographic history built a database of genetic diversification resources for domestic yaks and revealed a series of interesting discoveries regarding their domestication and dispersal.

Genome resequencing and genetic variation

We generated whole-genome sequences of 91 domestic yaks from 31 different locations and 1 wild yak from the QTP (Additional file 1: Fig. S1). Sequence data from a further 17 wild yaks were downloaded from NCBI for analysis (Additional file 1: Table S1). The samples of domestic yak were widely distributed and include most of the nationally recognized varieties. In total, ~ 21 billion raw reads and ~ 2078 Gb of aligned high-quality data with an average depth of 7.2× were generated using Illumina sequencing technology (Additional file 1: Table S2). After SNP calling and subsequent stringent quality control, we obtained 44,296,018 high-quality SNPs for all 108 individuals, with a range of 6,019,569 to 14,518,382 SNPs per individual. Most of the SNPs were located in intergenic regions; 144,673 were in exonic regions, including 27,947 nonsynonymous SNPs. The SNP distribution characteristics were similar to those of other livestock such as pigs [15] and sheep [16]. The mutations and genotypes specific to the wild yak genomes provide an important resource for breeding. We identified 330,962 wild-group-specific SNPs; 2733 of these are located in coding regions and involve 1009 genes that were enriched for olfactory-related functions. The living environment of wild yaks is harsher than that of domestic yaks: wild yaks live at higher altitudes and do not have stable pastures, so they need not only to prowl for food but also to avoid predators such as wolves. 
Their unusual olfactory capability might have helped them to adapt to their environment and avoid predators. We also identified variations in gene regions specific to each group of domestic yaks. Although the differences among groups were not obvious, the group-specific variation at the genomic level was significant and will help to provide an insight into the relevant unique traits. In our samples, the most obvious phenotypic divergence is the pure white hair specific to the Tianzhu group [17]. We identified 10 specific SNPs shared by all three TZ individuals, but no nonsynonymous SNPs were found.

Population genetic phylogeny

To identify the genetic relationships among domestic yaks, we constructed a phylogenetic tree from the yak SNPs using the neighbor-joining method, with wild yaks as an outgroup (Fig. 1). The results showed relatively close genetic distances and indicated that the domestic yaks were divided into three main branches. The branch containing BQ, JC, JD, CT, ZD, and DQ separated first from the wild yak. These areas are on the southeastern edge of the QTP and, except for ZD, are located at very similar latitudes. The Changdu area (N31°, E97°), at the center of these regions, was historically a populated area of the QTP. We therefore suggest that the Changdu area was most likely the origin of the domesticated yaks and that they first dispersed to the east and west, to BaQing and JingChuan, respectively. The second branch in the phylogenetic tree contained NR, GD, RD, SR, LZ, CN, ZB, SZ, SS, (KB, PL) and showed a small genetic distance from the first branch. These areas represent the core region of Tibet, including the central, west, and south of Tibet, suggesting that following successful domestication, yaks were gradually brought to the hinterland of the QTP. We defined this as the second dispersal of domestic yaks, through the east of Tibet to the south and west. 
The third branch of the phylogenetic tree contained XJ, JL, SB, LWQ, HH, GN, TZ, MW, DT, NY, GY, and BZ. Most of these are located at the edge of the QTP, in Qinghai, Gansu, Sichuang, Shanxi, Yunnan, and XinJiang. This suggests that the third dispersal of domestic yaks was from the center to the periphery of the plateau. Moreover, we found that the diversity of the third branch was higher than that of the other two branches. This is related not only to the hybridization of yaks in trade, but also to interspecies hybridization with wild yaks or cattle [18,19]; for example, the DaTong yak is a new breed created by hybridization with wild yaks. In comparison, most breeds in the second branch showed a lower diversity, consistent with a more closed trading environment, a harsher living environment, and a smaller population (Additional file 1: Table S3). In addition, we constructed a phylogenetic tree including all the domestic yak data of Qiu et al. [14]. The topology was similar for the samples from the present study, but the downloaded data formed narrow, disordered clusters in the second and third branches. This may reflect our more extensive sampling and more accurately defined breeds in the present study (Additional file 1: Fig. S2).

Fig. 1 Population genetic structure of the yak populations studied. a Neighbor-joining (NJ) tree of 109 yak individuals generated using TreeBest software, with wild yaks as the outgroup. Groups 1, 2, and 3 of domestic yaks are colored red, blue, and orange. b Plots of principal components 1 and 2 from PCA analysis of 92 Chinese domestic yak individuals using the GCTA software; sample colors as in a. c Population genetic structure of domestic and wild yak inferred from the ADMIXTURE analyses (K = 2-5) using whole-genome SNPs. d Linkage disequilibrium (LD) decay for the four separate groups/subgroups of populations measured by r2
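A neighbor-joining construction of this kind can be sketched with Biopython's `DistanceTreeConstructor`; here the between-group FST values reported in the text are repurposed as toy distances (the actual tree was built from genome-wide individual p-distances with TreeBest, so this is an illustration only):

```python
from Bio.Phylo.TreeConstruction import DistanceMatrix, DistanceTreeConstructor

# Lower-triangular toy distance matrix; off-diagonal values borrow the
# between-group FST estimates from the text purely for illustration.
dm = DistanceMatrix(
    names=["wild", "group1", "group2", "group3"],
    matrix=[[0],
            [0.060, 0],
            [0.068, 0.027, 0],
            [0.058, 0.022, 0.019, 0]])

tree = DistanceTreeConstructor().nj(dm)  # neighbor-joining
tree.root_with_outgroup("wild")          # root on the wild yaks
```

Rooting on the wild group mirrors the use of wild yaks as the outgroup in Fig. 1a.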
Population component, diversity, and linkage disequilibrium

Principal component analysis (PCA) and Bayesian model-based clustering analysis were employed to examine the phylogenetic groupings and provide additional evidence. The PCA did not show significant separation among the domestic yak samples, consistent with a single origin of domestication and the close genetic distances shown in the phylogenetic tree (Additional file 1: Fig. S3). However, when the wild yaks and ambiguous breeds (KB, PL, XJ) were excluded from the analysis, the three separate groupings of domestic yaks were evident (Fig. 1). Furthermore, the first principal component (PC1) separated groups 1 and 2 from group 3; PC2 separated groups 1 and 3 from group 2; and PC3 appeared to separate group 1 from groups 2 and 3, although this was not clear (Additional file 1: Fig. S4). In the clustering analysis performed using ADMIXTURE with K = 2, the yaks were genetically divided into wild and domestic samples. As K increased (K = 3-5), the domestic samples were not separated into distinct breeds, suggesting extensive genetic admixture among living domesticated yaks (Fig. 1). The genome-wide average θΠ value for the domestic groups (0.93-1.2 × 10⁻³) was similar to that for the wild yaks (1.1 × 10⁻³), which is consistent with previous research [14] (Additional file 1: Table S3). In addition, the higher θΠ of group 3 and the lower θΠ of group 2 support the inferences related to trade and geographical enclosure. Linkage disequilibrium analysis suggested that the wild group exhibited a rapid decay rate and a low level of LD, whereas the group 3 yaks showed an overall slow decay rate and a high level of LD (Fig. 1).

Selective analysis and comparison

We calculated pairwise FST to quantify the genetic differentiation among the four groups. 
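Pairwise FST of the kind reported next can be sketched from per-site allele frequencies. For brevity this uses Hudson's estimator rather than the Weir-Cockerham estimator the authors computed with vcftools, and the frequencies and sample sizes are hypothetical:

```python
import numpy as np

def hudson_fst(p1, p2, n1, n2):
    """Hudson Fst from per-site allele frequencies p1, p2 and haploid
    sample sizes n1, n2, combined as a ratio of averages over sites."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    num = (p1 - p2) ** 2 - p1 * (1 - p1) / (n1 - 1) - p2 * (1 - p2) / (n2 - 1)
    den = p1 * (1 - p2) + p2 * (1 - p1)
    return num.sum() / den.sum()

# Hypothetical allele frequencies at five SNPs in two populations
fst = hudson_fst([0.1, 0.5, 0.9, 0.3, 0.2],
                 [0.2, 0.6, 0.8, 0.3, 0.25], n1=40, n2=40)
```

The ratio-of-averages combination across sites is the form usually recommended for multi-locus estimates; window-level values like those below come from restricting the sums to the sites in each window.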
Pairwise FST ranged from 0.019 to 0.068, with an average of 0.043, which is smaller than that between diverged taurine cattle breeds [20] and is consistent with ongoing gene flow between wild and domestic yaks. Moreover, the FST between groups of domestic yaks ranged from 0.019 (group 2 vs group 3) to 0.027 (group 1 vs group 2), consistent with a very low degree of differentiation among the domestic breeds. FST between domestic and wild yaks ranged from 0.058 to 0.068, with the lowest value between group 3 and the wild group. This indicates a higher degree of crossbreeding between group 3 breeds and wild yaks, creating hybrid breeds such as the DaTong yak. Group 3 also had a lower FST with the wild group than the other domestic groups did (Additional file 1: Table S4), further indicating the higher degree of hybridization of group 3, consistent with the ADMIXTURE results. Regions under directional selection should show specific signatures of variation, including high population differentiation, lower levels of nucleotide diversity, and long-range haplotype homozygosity [21]. To determine whether directional selection might have occurred in groups of domestic yaks, we first explored the genomic landscape of population differentiation to identify candidate genes. Comparing the extremely high FST values (top 0.1%) against wild yaks using a sliding-window analysis, we identified 37, 56, and 39 potential positively selected genes in groups 1, 2, and 3, respectively (Additional file 1: Fig. S5, Table S5). Eleven genes (THEMIS2, XKR8, SMPDL3B, RPA2, TNFSF15, PACSIN2, TCIRG1, NDUFS8, CHKA, ALDH3B1, and SUV420H1) were shared by all three groups (Fig. 2) and represent the core differentiation genes between domesticated and wild yaks. Four of these genes (TCIRG1, NDUFS8, ALDH3B1, CHKA) play an important role in metabolic pathways. TCIRG1 encodes a subunit of a large protein complex known as a vacuolar H+-ATPase (V-ATPase). 
This protein helps regulate the pH of cells and their surrounding environment, and V-ATPase-dependent organelle acidification is necessary for intracellular processes such as protein sorting, zymogen activation, and receptor-mediated endocytosis [22]. NDUFS8 encodes a subunit of mitochondrial NADH:ubiquinone oxidoreductase, a multimeric enzyme of the respiratory chain responsible for NADH oxidation, ubiquinone reduction, and the ejection of protons from mitochondria [23]. ALDH3B1 encodes an isozyme that may play a major role in the detoxification of aldehydes generated by alcohol metabolism and lipid peroxidation [24]. CHKA has a key role in phospholipid biosynthesis and may contribute to tumor cell growth [25]. RPA2 can activate the ataxia telangiectasia and Rad3-related protein kinases and is involved in DNA metabolism, including DNA replication, repair, recombination, telomere maintenance, and the responses to DNA damage [26]. PACSIN2 encodes a member of the protein kinase C and casein kinase substrate in neurons family and is associated with disease and immunity [27]. Significant differentiation between domesticated and wild yaks was identified in 16, 17, and 4 additional genes in groups 1, 2, and 3, respectively, which suggests differences in selection among the different groups of domestic yaks (Additional file 1: Fig. S5). For a more global comparison of the selection differences among domestic yaks, we relaxed the selection threshold to the top 1% FST region and performed enrichment analysis of the candidate genes. When comparing group 1 with group 2, the top 1% FST region contained 136 genes that were enriched for MHC-related GO terms (GO:0002504, GO:0042613) and molybdenum ion binding (GO:0030151). Several disease pathways were enriched, such as type I diabetes mellitus, graft-versus-host disease, rheumatoid arthritis, asthma, Leishmaniasis, and autoimmune thyroid disease (Additional file 1: Table S6, Fig. S6). 
Some immune-related pathways were also enriched, such as allograft rejection, the intestinal immune network for IgA production, antigen processing and presentation, and hematopoietic cell lineage. Thus, some of these genes may contribute to adaptation to the broadly similar but locally distinct environments of grass, insects, and climate. The comparison between group 1 and group 3 identified 206 selection-difference genes that were enriched for similar GO terms and KEGG pathways to those mentioned above, but not molybdenum ion binding or hematopoietic cell lineage (Additional file 1: Table S6). The comparison between group 2 and group 3 identified 185 genes that were not significantly enriched for any pathway. A greater number of genes was identified for group 3 than for the other groups, but with less enrichment, which may be related to its extensive distribution or higher degree of hybridization. To better understand the selection acting on the three domestic yak groups, we identified potential selective-sweep regions using the signatures of high FST and a large difference in pi [14,16] (Additional file 1: Figs. S7-S9). We identified 298, 365, and 383 selective-sweep genes in groups 1, 2, and 3, respectively; 70 of these were shared by all domestic yak groups, reflecting the influence of domestication (Additional file 1: Fig. S10, Table S7). Some of these genes are related to energy metabolism (PFKFB1, SLC25A10, MRPL12), nerve development and growth (ATP2B2, CACNA1B, GHRL), and phagocytes (ARFGAP3, HGS, CCDC137, ACTG1, ZC3H3, XKR7). PFKFB1 encodes a member of the bifunctional 6-phosphofructo-2-kinase family that forms a homodimer catalyzing both the synthesis and degradation of fructose-2,6-biphosphate using independent catalytic domains [28]. GHRL is the ligand for growth hormone secretagogue receptor type 1 (GHSR), which induces the release of growth hormone from the pituitary. 
This has an appetite-stimulating effect, induces adiposity, stimulates gastric acid secretion, and is involved in growth regulation [29,30]. Selection on metabolism and organ development would benefit reproduction; selection on nerve development probably contributed to the taming of yaks; and selection on phagocytes and responses to stimulus would benefit immunity and livability. Among the selected genes specific to each of the three domestic groups, the main functions of the group 1 genes are immunity and disease; two genes (KDM1A, VEGFD) related to primitive erythrocyte differentiation were found in group 2 [31,32]; and the group 3-specific genes function in disease and metabolism. In addition, the wild yak's selective-sweep regions may indicate the direction of natural selection. We found 24 overlapping genes that were also identified in all three domestic groups and were enriched in functions related to pathogenic Escherichia coli infection and immunity (leukocyte transendothelial migration, phagosome) (Additional file 1: Table S8).

Introgression analysis in yaks

Introgression has occurred extensively in bovine species [18,19,33]. We identified gene introgression between domestic yaks, wild yaks, and Tibetan cattle using ChromoPainter [34] software, and found an interesting pattern of unbalanced introgression among these groups (Fig. 3, Additional file 2) [19]. It is noteworthy that introgression between Tibetan cattle and domestic yaks is more frequent than that between Tibetan cattle and wild yaks. Second, the introgression from wild yaks into domestic yaks was far less than that from domestic yaks into wild yaks (118 M vs 455 M). To further explore the influence of introgression on yak, we analyzed the genes in the introgressed regions. First, we found 521 genes that overlapped the region introgressed from yaks into Tibetan cattle, including 11 genes that function in bile secretion pathways, seven genes associated with endocrine and other factor-regulated calcium reabsorption, nine genes associated with gastric acid secretion, and six glyceride metabolism-related genes (Additional file 1: Table S10). These genes, related to digestion and nutrient absorption, infiltrated from yaks into Tibetan cattle, helping them extract nutrients and energy from the scarce food available on the plateau. Some introgressed genes related to disease and signal transduction also help Tibetan cattle better adapt to the plateau environment. We found 129 genes in the region introgressed from Tibetan cattle into yaks that are involved in signal transduction, physiological regulation (circadian rhythm), and phosphoinositide metabolism pathways, but with no significant enrichment (Additional file 1: Table S11).

Fig. 3 The distribution of introgression among wild yak, Tibetan cattle, and the three groups of domestic yaks. a Comparison of introgression from Tibetan cattle to yaks and vice versa for each group. b Comparison of introgression from wild yak to domestic yaks and vice versa for each group. c The overall pattern of introgression in each individual. The OTHER group contained the individuals from Xinjiang, Pali, Kangbu, and Geermu, which were difficult to distinguish in the PCA

The origin of yak domestication

In general, compared with populations derived through subsequent migration and colonization, populations near the centre of initial origin are expected to show higher haplotype and nucleotide diversity, as they maintain more ancestral variation. However, this genetic signature might be blurred by recent gene flow after domestication. Group 3 had the highest θΠ (0.00111447, compared to 0.000868238 for group 1 and 0.000856952 for group 2), but its wide geographic range, the disordered phylogeny of three samples from the same region, and its high gene exchange with Tibetan cattle, wild yaks, and other domestic yaks do not support it as the origin of domestication. 
In contrast, group 1 has an earlier divergence and higher genetic diversity than group 2. The inferred origin is consistent with the history of the human transformation from a nomadic to an agricultural society on the QTP, and with the previous suggestion based on mitochondrial DNA by Guo et al. [35]. In detail, fossil records show that the ancient Qiang nationality lived and formed primitive villages in the group 1 area as early as 5000 years ago; the yaks were domesticated by the Qiang nationality, and this achievement would certainly have improved their lifestyle and helped them exploit, and settle down on, the QTP. Interestingly, the dispersal of domestic yaks in our inference is consistent with the migration of the Qiang nationality. After dispersing through the east of Tibet to the south and west, domestic yaks spread from the center to the periphery of the plateau. The wide geographic range of the samples in the group 3 branch may indicate that this dispersal was associated with frequent human activity after the advent of large-scale husbandry. Trade might have been the principal driver of the dispersal and, given the ancient trading history of China, we suggest that the Tang-Bo Ancient Road and the Silk Road played important roles in the spread of domestic yaks. Although the SB and NY yaks were located closer to the second-branch breeds, they clustered in the third branch because they were tributes collected from other areas owned by the ruling class of Tibet, and many of these tributes were transported along the Tang-Bo Ancient Road. The separate clustering of three individuals of the same breed also reflects the exchange of these yaks.

The unbalanced introgression

In our results, we found that the introgression from Tibetan cattle into yaks was far less than that from yaks into Tibetan cattle, suggesting that the successful adaptation of cattle to the plateau environment depends on gene introgression from yak. 
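The unbalanced allele sharing discussed in this section was quantified with ChromoPainter; a simpler, complementary site-pattern statistic often used to test for introgression is Patterson's D (the ABBA-BABA test). A sketch with entirely hypothetical derived-allele frequencies for a (((P1, P2), P3), outgroup) topology:

```python
import numpy as np

def patterson_d(p1, p2, p3, p4):
    """Patterson's D for populations (((P1, P2), P3), P4-outgroup),
    computed from per-site derived-allele frequencies.
    D > 0 suggests excess allele sharing between P2 and P3."""
    p1, p2, p3, p4 = (np.asarray(p, dtype=float) for p in (p1, p2, p3, p4))
    abba = (1 - p1) * p2 * p3 * (1 - p4)
    baba = p1 * (1 - p2) * p3 * (1 - p4)
    return float((abba - baba).sum() / (abba + baba).sum())

# Hypothetical frequencies at three sites, illustration only:
# P2 and P3 share derived alleles more often than P1 and P3 do.
d = patterson_d([0.0, 0.1, 0.2],
                [0.9, 0.8, 0.6],
                [0.9, 0.8, 0.7],
                [0.0, 0.0, 0.0])
```

In practice the groups here could play the roles of P1-P3 (for example, two domestic groups against Tibetan cattle, with an unadmixed bovid as outgroup), with significance assessed by block jackknife.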
The introgression between Tibetan cattle and domestic yaks is more frequent than that between Tibetan cattle and wild yaks; we believe this is related to domestication and breeding, in that domestic yaks have more opportunities to exchange genes with Tibetan cattle. We also found that the introgression from wild yaks into domestic yaks was far less than that from domestic yaks into wild yaks. This is related to the selective breeding of domestic yaks; however, the small population of wild yaks, their poorer living environment, and the bottleneck effect [36] could also lead to lower retention of introgressed segments in wild yaks. In addition, we detected more introgression between the wild yaks and group 3 than with groups 1 and 2, which may be because the yaks in group 3 experienced more trade contacts and artificial breeding. For example, the DaTong yak is a new crossbreed between domestic and wild yak, and the white hair of the Tianzhu yak is also the product of long-term artificial breeding. In the regions of introgression between domestic and wild yak, we found 2608 genes (domestic to wild) and 307 genes (wild to domestic). These two sets of genes showed similar pathway enrichment results (Additional file 1: Tables S12, S13), for example focal adhesion, mucin-type O-glycan biosynthesis, and arrhythmogenic right ventricular cardiomyopathy (ARVC), indicating that introgression between domestic and wild yaks is likely a balanced exchange in function, which may lead to heterosis.

Conclusions

Using whole-genome sequencing, our population genomic analyses of domesticated yaks provide new insights into their origin, historical migrations, and introgression events. We have clarified the phylogenetic relationships among the main breeds of domesticated yak, which form three separate groups. Group 1 was distributed in the southeast of the QTP, which we infer to be the origin of domesticated yaks. 
Subsequently, yaks spread with the ancient Qiang to the hinterland of Tibet and became group 2. With increasing human activity, yaks gradually spread to all parts of the plateau, forming group 3, with frequent exchange among breeds. We identified 11 genes related to metabolism and immunity that showed significant selection between domestic and wild yak. Although the phenotypic differences among the three groups are subtle, we were able to identify differences among them in genes functioning in disease and energy metabolism. We also characterized the patterns of introgression among domestic yak, wild yak, and Tibetan cattle, and detected several instances of unbalanced introgression: higher introgression was detected from yak into Tibetan cattle, and from domestic yak into wild yak, than vice versa. Our study provides insight into the genetic differences among domestic yaks, and our results will be beneficial for selective breeding.

Phylogenetic and population genetic analyses

An individual-based Neighbor-Joining phylogenetic tree was constructed from the population-scale SNPs using the TreeBest [41] software under the p-distances model. We performed PCA on the population-scale SNPs using the packages EIGENSOFT [42] and GCTA [43], and the significance level of the eigenvectors was determined using the Tracy-Widom test. Population genetic structure and individual ancestry proportions (admixture) were inferred using the program ADMIXTURE version 1.3.0 [44], which employs a maximum-likelihood approach with the expectation-maximization algorithm. We increased the number of pre-defined genetic clusters from K = 2 to K = 5. Vcftools was used to calculate the population genetic statistics: pairwise nucleotide variation as a measure of variability (θπ), genetic differentiation (FST), and the selection statistic Tajima's D (a measure of selection in the genome).
A sliding-window approach (100-kb windows sliding in 10-kb steps) was applied to these calculations, and per-site calculation of π and FST was performed with --site-pi and --weir-fst-pop. To estimate linkage disequilibrium, we calculated r2 between pairs of loci using PopLDdecay [45]. The average r2 value was calculated for pairwise markers in a 500-kb window and averaged across the whole genome. The demographic history of yaks from geographically diverse populations was inferred using a hidden Markov model approach as implemented in the pairwise sequentially Markovian coalescent (PSMC) analysis [46], based on the SNP distribution.

Identification of selected regions

For each group, FST values in the top 0.1% of windows were taken to indicate significant selection, and values in the top 1% were taken to indicate selected regions. To detect regions with significant signatures of a selective sweep, we considered the joint distribution of the θπ ratios (θπ, population 1 / θπ, population 2) and FST values. Using an empirical procedure, we selected windows with simultaneously significant low or high θπ ratios (the 2% left and right tails) and significantly high FST values (the 2% right tail) of the empirical distribution as regions with strong selective sweep signals along the genome, which should harbor genes that underwent a selective sweep. Protein-coding genes in these outlier windows were treated as candidate positively selected genes. Moreover, EigenGWAS [47], which uses genome-wide association analysis of eigenvectors to identify loci under selection, was also applied; since EigenGWAS categorizes samples based on PCA, we performed this analysis between each pair of yak groups. We then retained the genes under selection that were identified by both methods. Gene enrichment analysis was performed with EnrichPipelin [48].
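As a concrete illustration, the joint tail filter described above can be sketched in a few lines of Python. The data here are hypothetical placeholders; in the study, the per-window FST and θπ-ratio values come from vcftools sliding windows, and the 2% cutoffs follow the empirical procedure described in the text.

```python
# Sketch of the combined selective-sweep filter: keep windows whose FST falls
# in the top 2% right tail AND whose pi ratio falls in either 2% tail.

def quantile(values, q):
    """Empirical quantile by linear interpolation on sorted values."""
    s = sorted(values)
    pos = q * (len(s) - 1)
    lo, hi = int(pos), min(int(pos) + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (pos - lo)

def sweep_windows(fst, pi_ratio):
    """Return indices of candidate sweep windows (illustrative cutoffs)."""
    fst_cut = quantile(fst, 0.98)                      # 2% right tail of FST
    lo_cut = quantile(pi_ratio, 0.02)                  # 2% left tail of ratio
    hi_cut = quantile(pi_ratio, 0.98)                  # 2% right tail of ratio
    return [i for i, (f, r) in enumerate(zip(fst, pi_ratio))
            if f >= fst_cut and (r <= lo_cut or r >= hi_cut)]
```

In practice each window would also carry its chromosome and coordinates so that candidate genes can be looked up in the annotation.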
Benjamini-Hochberg FDR [49] (false discovery rate) correction was used to adjust the P values for multiple testing. The Gene Ontology categories "Molecular Function," "Biological Process," and "Cellular
Lack of Associations Between Dietary Intake and Gastrointestinal Symptoms in Autism Spectrum Disorder Background: Many individuals with autism spectrum disorder (ASD) have significant gastrointestinal (GI) symptoms, but their etiology is currently unknown. Dietary interventions are common in children and adolescents with ASD, including diets with increased omega-3 fatty acids or diets free of gluten and/or casein, which may also impact GI symptoms and nutrition. However, little is known about the relationship between nutritional intake and GI symptomatology in ASD. The objective of this study was to assess the relationships between GI symptoms, omega-3 intake, micronutrients, and macronutrients in children with ASD. Methods: A total of 120 children diagnosed with ASD participated in this multisite study. A food frequency questionnaire was completed by the patient’s caretaker. The USDA Food Composition Database was utilized to provide nutritional data for the food items consumed by each participant. GI symptomatology was assessed using a validated questionnaire on pediatric gastrointestinal symptoms. Results: There were no significant associations between GI symptoms and the amount of omega-3 fatty acids and/or other micro- and macronutrients contained in the diet. Conclusions: This study suggests that dietary variations do not appear to drive GI symptoms, nor do GI symptoms drive dietary variations in those with ASD, although causation cannot be determined with this observational assessment. Furthermore, there may be other factors associated with lower GI tract symptoms in ASD, such as increased stress response.

INTRODUCTION

Autism spectrum disorder (ASD) is a neurodevelopmental condition characterized by persistent deficits in social communication and social interaction, as well as restricted and repetitive patterns of behavior that present during early development and result in clinically significant impairment (1).
Research has shown that children with ASD tend to have more gastrointestinal (GI) symptoms than their typically developing peers (2)(3)(4)(5)(6), especially constipation, diarrhea, and abdominal pain (2,(7)(8)(9). A review of the literature in 2010 indicated that the proportion of ASD individuals with co-occurring GI problems may range from 9 to 91% (10), with the reported variation in prevalence rates potentially resulting from differences in the diagnostic methods used to assess GI symptoms in this population. Despite the relatively high rates of GI symptoms in ASD, their etiology is poorly understood. It is therefore important to explore dietary associations with GI symptoms, given the potential for certain diets to be associated with GI dysfunction in ASD. Many individuals with ASD have used complementary and alternative medicine (CAM) approaches, including dietary changes, as part of their treatment for the core ASD symptoms, as well as for GI disturbance and sleep problems, or to promote general health (11). As such, changes in diet may affect GI functioning in ASD. Some studies have shown that children with ASD may be deficient in micro- and macronutrients (12)(13)(14)(15), as well as iron (16), which could result from altered GI function and/or potentially impact GI symptoms. Furthermore, many parents and caretakers have employed gluten- and casein-free (GFCF) diets (17), which seem to have mixed effects on core ASD symptoms (18) and GI symptoms (19)(20)(21)(22)(23) in ASD. In addition, many families also administer omega-3 fatty acids in the hope of deriving benefit, but the results from randomized, placebo-controlled clinical trials of omega-3 supplementation in ASD are also mixed in most cases (24,25).
In addition, limited dietary intake and selective food preferences, common among individuals with ASD (26), can result in nutritional deficiencies or other problems that could potentially interact with GI symptoms, either contributing to these symptoms or emerging in response to them. As such, a better understanding of the association between dietary intake and GI functioning in ASD is of interest, especially given the implications for treatment. The focus of the present study is to assess the associations between approximate omega-3, micro-, and macronutrient intake over the prior month and self- and parent-reported GI symptoms in individuals with ASD, with the goal of determining whether dietary factors may be related to GI symptomatology.

METHODS

A total of 120 patients with ASD (mean age = 11.8, SD = 3.8, range = 6-18; 108 male; 92.5% Caucasian; mean full-scale intelligence quotient = 84, SD = 22.6, range = 36-130) participated in this study. Patients were recruited sequentially from individuals enrolled in the Autism Speaks Autism Treatment Network (AS-ATN) registries at the University of Missouri Thompson Center for Autism & Neurodevelopmental Disorders in Columbia, Missouri, and at the Vanderbilt Kennedy Center and Monroe Carrell Jr. Children's Hospital at Vanderbilt University in Nashville, Tennessee. To expand the sample, additional patients who were not enrolled in the AS-ATN were recruited from clinic patients at each site. Diagnosis of ASD was made based on Diagnostic and Statistical Manual of Mental Disorders IV-TR criteria (27) and the administration of the Autism Diagnostic Observation Schedule (ADOS) (28). Patients with known genetic, metabolic, or bleeding disorders were excluded from this study, as an associated portion of this project involved drawing blood. A more detailed explanation of the inclusion and exclusion criteria can be found elsewhere (29,30).
This study was carried out in accordance with the recommendations of the Institutional Review Boards at the University of Missouri and Vanderbilt University, with written informed consent from all participants over the age of 18 and consent from the parent/guardian and assent from those under the age of 18. All participants gave written informed consent in accordance with the Declaration of Helsinki. The study protocol was approved by the Institutional Review Boards at the University of Missouri and Vanderbilt University.

Assessment of Gastrointestinal Symptoms

Gastrointestinal symptoms were assessed based on parent or self-report using the Questionnaire of Pediatric Gastrointestinal Symptomatology-Rome III (QPGS-RIII) (31). Patients who were over the age of 18 and able to provide an accurate account of their GI symptoms, as determined by asking the caretaker and/or the patient whether they could reliably report their GI symptoms over the month before participation in the study, completed the self-report version of the QPGS-RIII. Otherwise, the QPGS-RIII was completed by a parent or caretaker who could provide a reliable account of the patient's GI functioning over the month before participation. A scoring rubric previously created by the research team was used to create continuous variables for upper and lower GI tract symptoms over the past month (29). Briefly, items from the QPGS-RIII were sorted into upper and lower GI tract symptoms, and the scores for each were summed to reflect overall upper and lower GI scores for each patient. Greater QPGS-RIII scores indicate greater frequency, severity, and duration of GI symptoms.
Assessment of Nutritional Intake

Omega-3 nutritional intake over the same period (the month before entering the study) was assessed for each patient using a food frequency questionnaire (FFQ) designed to capture omega-3 fatty acid intake as well as information that could be used to calculate micro- and macronutrient intake (32). The 152-item FFQ was developed based on foods that contain 10 mg of n-3 fatty acids per medium serving from fish, animal, and plant sources. The FFQ was completed either by the patient's parent or caregiver or by the patient. Responses were analyzed for nutritional intake using the online, publicly available United States Department of Agriculture Food Composition Database (33), which provides nutrient information for specific foods. A total monthly estimate of the patient's nutritional intake was calculated by summing the nutrient information for each food based on the serving size and frequency of consumption. Total nutrient scores were created for both the University of Missouri and Vanderbilt AS-ATN sites to determine whether differences exist between midwestern and southern region diets. The individuals scoring the FFQs were blinded to the patients' GI status.

Gastrointestinal Symptoms

Both sites had a similar number of patients with upper and lower GI tract problems [t(113) = 1.608, p = 0.111]. Therefore, the two populations were pooled for the primary comparisons. In addition, upper GI tract problems were significantly correlated with lower GI tract problems (r = 0.411, p < 0.001). The most common GI disorder reported by the participants was functional constipation (42.5%), followed by irritable bowel syndrome (11.7%) and lower bowel pain associated with bowel symptoms (9.2%).
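The monthly intake calculation described above (nutrient content per serving, multiplied by reported serving frequency and summed over foods) can be sketched as follows. The food items and nutrient values below are hypothetical placeholders; the study used the USDA Food Composition Database for the actual values.

```python
# Minimal sketch of the FFQ-based monthly nutrient total. The table of
# per-serving nutrient amounts is invented for illustration only.

NUTRIENTS_PER_SERVING = {
    "salmon":  {"omega3": 2.2, "protein": 22.0},   # grams per medium serving
    "walnuts": {"omega3": 2.5, "protein": 4.3},
}

def monthly_intake(ffq_responses):
    """Sum nutrient content over all reported foods.

    ffq_responses: list of (food, servings_per_month) pairs.
    Returns a dict mapping nutrient name to total monthly intake.
    """
    totals = {}
    for food, servings in ffq_responses:
        for nutrient, amount in NUTRIENTS_PER_SERVING[food].items():
            totals[nutrient] = totals.get(nutrient, 0.0) + amount * servings
    return totals
```

A real pipeline would look each food up in the nutrient database rather than a hard-coded table, but the summation step is the same.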
A more detailed description of the GI disorders experienced by this study population, as well as how the GI scores were calculated, can be found in previous reports (29,30).

Omega-3 and Dietary Nutrient Intake

See Table 1 for approximate mean monthly nutrient intake values across both the University of Missouri and Vanderbilt sites. First, as the nutrient variables were significantly skewed and non-normal, nonparametric Spearman rank correlations, which do not assume a normal distribution of the data, were conducted on the full dataset (i.e., without removing outliers). In this way, we could assess the potential contribution of any extreme diets and picky eaters. Total GI tract symptoms were not significantly correlated with fatty acids (rs = 0.145, p = 0.20) or gluten (rs = 0.114). However, fiber was positively correlated with upper and lower GI tract symptoms (rs = 0.243, p = 0.030) when outliers were included. It is not unusual for extreme points to increase the strength of a correlation, and the sample included three participants who consumed over 656 g of fiber in the past month. Next, we wished to reanalyze the data excluding the outliers. As such, 173 outlier values (3.7% of the data) were removed using the interquartile range rule (i.e., values more than 1.5 times the interquartile range), creating a normally distributed dataset that could be analyzed using Pearson correlations. See Table 1 for the number of patients remaining for each micro- and macronutrient. Estimated omega-3 fatty acid intake was not significantly correlated with upper or lower GI tract symptoms across both sites (r = 0.10, p = 0.304). Furthermore, upper and lower GI tract symptoms were not significantly correlated with the consumption of gluten, casein, water, calories, protein, fats, carbohydrates, fiber, sugar, cholesterol, or any vitamins or minerals. See Table 2 for Pearson correlations between upper and lower GI tract problems and each nutrient intake value.
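The interquartile-range rule used for outlier removal (dropping values more than 1.5 times the IQR beyond the quartiles) can be sketched as a small helper. The quartile computation below is a deliberately crude illustration; statistical packages use more refined interpolation.

```python
# Sketch of the IQR outlier rule: keep values within
# [Q1 - 1.5*IQR, Q3 + 1.5*IQR], where IQR = Q3 - Q1.

def iqr_filter(values):
    """Return the values that survive the 1.5*IQR rule."""
    s = sorted(values)
    n = len(s)
    q1, q3 = s[n // 4], s[(3 * n) // 4]   # crude quartiles for illustration
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [v for v in values if lo <= v <= hi]
```

After filtering, the cleaned variables can be compared with Pearson correlations, as in the analysis described above.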
DISCUSSION

Previous research from this multidisciplinary investigative team found associations between the stress response and GI symptoms among those with ASD (29,30). General nutritional intake, as well as consuming foods high in n-3 PUFA, may also affect GI symptoms, or GI symptoms might affect diet. Thus, we sought to examine the association between nutritional intake and GI symptoms in the same group of individuals from the aforementioned study. The present study was conducted to specifically examine the effects of diets high in n-3 PUFA on GI problems in those with ASD. The results from this multisite study indicate no association between consumption of a diet high in n-3 PUFA and upper or lower GI tract symptoms in the study sample. Furthermore, the micro- and macronutrients contained in the diet were also not significantly associated with upper and/or lower GI tract symptoms. These results suggest that the previously reported relationships between stress reactivity and GI symptomatology are not due to dietary factors, at least those assessed herein, and begin to provide evidence against the concept of dietary factors impacting GI symptomatology, or of GI symptomatology impacting diet, in children with ASD. The isolated finding of a positive correlation between fiber intake and GI symptomatology before excluding outliers may result from unsuccessful attempts to manage the GI symptoms with high fiber intake. Indeed, dietary fiber has been shown to be associated with GI symptoms of abdominal pain, bloating, constipation, flatulence, and diarrhea (34,35). Therefore, parents, caretakers, and clinicians should be aware of this finding, as well as of recommended fiber intake (36,37), when considering treatment of abdominal pain and constipation in children with ASD. There are a number of limitations of this study that should be addressed. First, the study did not examine the effects of altered diets on GI functioning in the sample.
As many individuals with ASD have altered diets, it is possible that a subgroup of autistic individuals with altered diets may have concomitant alterations in GI functioning. Thus, future research should examine the effects of altered diets on GI symptoms in ASD. Second, the present study utilized a food frequency questionnaire that contained a limited number of food items. While the questionnaire contains a wide range of food items, it is not exhaustive. Furthermore, it is not clear whether participants' use of dietary supplements could be related to their GI symptoms. Future research may wish to utilize a food diary to log food items and amounts consumed per day, as well as to assess whether the participant is taking dietary supplements in an attempt to reduce ASD symptoms. Third, the sample was largely male and Caucasian, so it is not clear whether the results generalize to females and other ethnicities. Fourth, the QPGS depends on an informant, usually a parent, to answer a questionnaire regarding their child's GI functioning, including identification of the location of abdominal discomfort. Given that many children with ASD are nonverbal or have limited verbal abilities, it is possible that the GI scores may not be accurate for all participants. Future GI investigations should utilize formal gastroenterological evaluations or, at minimum, consider the use of ASD-specific measures of GI symptoms (38). Finally, the results presented herein will need to be replicated before drawing conclusions regarding the relationship between diet and GI disorders in ASD in the broader population. Larger samples would also allow incorporation of other co-occurring conditions to examine their relationships with the results from this study, as well as better recognition of subtypes in the heterogeneous ASD population.
CONCLUSION

The results from this study indicate no significant associations between dietary omega-3 intake and GI symptoms, or between dietary micro- and macronutrient intake and GI symptoms, in a sample of 120 individuals with ASD in whom relationships between stress reactivity and GI symptoms were previously observed and reported. These findings suggest that dietary changes do not appear to be driving GI symptoms, nor do GI symptoms appear to impact dietary behavior, among those with ASD.

DATA AVAILABILITY

The datasets generated for this study are available on request to the corresponding author.

ETHICS STATEMENT

This study was carried out in accordance with the recommendations of the Institutional Review Boards at the University of Missouri and Vanderbilt University, with written informed consent from all participants over the age of 18 and consent from the parent/guardian and assent from those under the age of 18. All participants gave written informed consent in accordance with the Declaration of Helsinki. The protocol was approved by the Institutional Review Boards at the University of Missouri and Vanderbilt University.

AUTHOR CONTRIBUTIONS

BF conceptualized and designed the study, coordinated and supervised data collection, drafted the initial manuscript, and reviewed and revised the manuscript. ShM and DS processed and entered data, conducted a literature review, and assisted with preparation of the manuscript. KD carried out the statistical analyses and assisted with preparation of the manuscript. SaM collected the GI and diet data from patients at Vanderbilt University. JV-V supervised data collection at Vanderbilt University and revised the manuscript. KG, KS, and MB provided their expertise and guidance regarding autism spectrum disorder and revised the manuscript. DB supervised the research team, provided expertise and guidance regarding autism spectrum disorder and gastrointestinal disorders in autism, and revised the manuscript. All authors approved the final manuscript as submitted.
Erasure-Coding-Based Storage and Recovery for Distributed Exascale Storage Systems: Various techniques have been used in distributed file systems for data availability and stability. Typically, data are stored using a replication-technique-based distributed file system, but due to the problem of space efficiency, an erasure-coding (EC) technique has been utilized more recently. The EC technique improves on the space efficiency of the replication technique. However, the EC technique has various performance degradation factors, such as encoding and decoding overhead and input and output (I/O) degradation. Thus, this study proposes a buffering and combining technique in which the various I/O requests that occur during encoding in an EC-based distributed file system are combined into one and processed. In addition, it proposes four recovery measures (including disk input/output load distribution).

Introduction

In recent years, big-data-based technologies have been studied in various fields, including artificial intelligence, the Internet of Things, and cloud computing. In addition, the need for large-scale storage and distributed file systems to store and process big data efficiently has increased [1][2][3].
The distributed file system is a method for distributing and storing data, and Hadoop is typically used for this purpose [4][5][6]. Hadoop consists of distributed file storage technology and parallel processing technology; only the former is discussed in this study. The distributed file storage technology in Hadoop is called the Hadoop distributed file system (HDFS), in which a replication technique is used to divide data into blocks of a certain size and to replicate and store them [7][8][9]. However, a replication technique requires a large amount of physical disk space to store the replicated blocks. In particular, for companies that store and process data at large scale, considerable cost is incurred in implementation and management because the system scale increases exponentially [10][11][12]. To solve the problem of space efficiency, an erasure-coding (EC) technique has been adopted in the HDFS [13][14][15].

The EC technique in the HDFS stores data by encoding the original data and striping them into K data cells and M parity cells [16][17][18]. In the replication technique, blocks are replicated and stored, whereas distributed storage through encoding in the EC technique adds only parity cells to the existing data cells, which achieves better space efficiency than replication and is suitable for backup and compression [19,20]. However, since data and parity cells are stored across a number of DataNodes in a distributed manner through encoding, a single disk input/output (I/O) is converted into a large number of tiny disk I/Os. Performance degradation of the overall system may occur through these many tiny disk I/Os, and I/O performance can degrade rapidly as K and M (the numbers of data and parity cells, respectively) become larger and the striping size becomes smaller [21][22][23][24][25].
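To make the striping idea concrete, the sketch below encodes a byte string into K data cells plus a single XOR parity cell and rebuilds one lost cell. This is a deliberate simplification: HDFS EC uses Reed-Solomon codes with M parity cells, which tolerate multiple simultaneous failures, whereas plain XOR parity tolerates only one.

```python
# Simplified illustration of EC striping with one XOR parity cell.
# Real HDFS EC uses Reed-Solomon with M parity cells, not plain XOR.

def encode(data: bytes, k: int):
    """Stripe data into k equal-size cells plus one XOR parity cell."""
    cell = (len(data) + k - 1) // k  # cell size, rounding up
    cells = [data[i * cell:(i + 1) * cell].ljust(cell, b"\0")
             for i in range(k)]
    parity = cells[0]
    for c in cells[1:]:
        parity = bytes(a ^ b for a, b in zip(parity, c))
    return cells, parity

def recover(cells, parity, lost: int):
    """Rebuild one lost data cell by XOR-ing parity with surviving cells."""
    rebuilt = parity
    for i, c in enumerate(cells):
        if i != lost:
            rebuilt = bytes(a ^ b for a, b in zip(rebuilt, c))
    return rebuilt
```

Note that recovery reads k - 1 surviving cells plus the parity cell, which is exactly the source of the extra disk I/O load during decoding that the paper's recovery measures aim to distribute.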
The replication technique employs replicated data at the time of failure, whereas the EC technique employs decoding when recovering data. Since decoding restores data through a large number of disk I/O operations, it imposes a heavier disk I/O load than the replication technique does [26][27][28]. EC is a low-cost recovery technique for fault-tolerant storage that is now widely used in today's distributed storage systems (DSSs). For example, it is used by enterprise-level DSSs [5,29,30] and many open source DSSs [31][32][33][34][35]. Unlike replication, which simply creates identical copies of data to tolerate failure without data loss, EC has much less storage overhead through encoding/decoding of data copies, while maintaining the same level of fault tolerance as the replication technique [36]. DSSs implement EC mostly based on the Reed-Solomon (RS) code [37], but some DSSs, including HDFS, Ceph [34], and Swift [32], provide various EC schemes and EC configuration control functions.

As described above, existing DSSs have addressed the performance degradation factors of EC based on parallel processing and enabled degraded I/O to some extent, but they have not considered the I/O load problem between disks that occurs when EC is applied in a parallel processing manner, which is expected to cause a large amount of inter-disk I/O load when recovering in parallel on an EC-based distributed file system. Therefore, this study proposes measures that reduce the many tiny disk I/Os in the EC-based HDFS, discusses issues encountered during parallel recovery, and suggests measures to address them.
The main ideas of our paper are an input/output buffering and combining technique, which combines and processes multiple I/O requests that occur during encoding in an EC-based distributed file system, and a disk I/O load balancing technique, a recovery method that distributes the disk I/O load that occurs during decoding. More importantly, for file recovery, disk I/O load distribution, random block placement, and matrix recycling techniques are applied. The values and contributions of our work are summarized as follows.

The input/output buffering step does not make I/O requests for the same basic block group wait during I/O processing; instead, it creates a secondary queue for the block group whose I/O is being processed and collects the waiting I/O there. In the buffering process, the system first checks whether a secondary queue exists. If it does not, it creates one and adds the waiting I/O to it; if it does, the requested I/O is added to the existing secondary queue. When the addition is complete, the secondary queue setting is released. Since I/O waiting in the primary queue can be minimized through this buffering, system performance degradation can be prevented. In the input/output combining step, the worker does not process I/O as single units; instead, it merges the I/O accumulated in the secondary queue through buffering and processes it as one combined I/O.
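The buffering and combining steps above can be sketched as a small class. The names and the (offset, size) request model here are our own simplification for illustration, not the actual HDFS data structures.

```python
# Sketch of buffering-and-combining: requests for the same block group are
# collected in a per-group secondary queue, then merged into one combined I/O.

from collections import defaultdict

class CombiningQueue:
    def __init__(self):
        # block group id -> list of buffered (offset, size) requests
        self.secondary = defaultdict(list)

    def buffer(self, block_group, request):
        """Buffering step: park a waiting request in the secondary queue
        (created on first use) instead of holding up the primary queue."""
        self.secondary[block_group].append(request)

    def combine(self, block_group):
        """Combining step: merge all buffered requests for a block group
        into a single (offset, length) I/O; returns None if nothing waits."""
        reqs = self.secondary.pop(block_group, [])
        if not reqs:
            return None
        start = min(off for off, _ in reqs)
        end = max(off + size for off, size in reqs)
        return (start, end - start)
```

In this simplified model the combined request covers the full span of the buffered requests; the benefit is one larger I/O instead of many tiny ones, which is the effect the text describes.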
Multiple I/Os can thus be processed as one through combining, and I/O efficiency is greatly improved by increasing the size and reducing the number of I/Os, which lowers the overall network load and contention. The disk I/O load balancing method places a load information table through which the NameNode can check the access load of each disk, and performs recovery only when the disks requiring access can handle the load at the time of a recovery request. In this paper, to distribute the disk I/O load, one disk allows only one I/O at a time during failure recovery. Therefore, before issuing a recovery request, the recovery worker checks whether the disks requiring access are in use and decides whether to proceed with the recovery. When a recovery request is made, the accessed disks are marked as in use, and when the recovery completes they are marked as available again; this method, together with the check of disk usage before performing recovery, was applied to the EC-based HDFS.

The organization of this paper starts with the introduction, followed by a description of the replication and EC techniques used in distributed file systems in Sections 2 and 3. Section 4 presents a problem that occurs when data are stored through encoding and a measure to mitigate it. Section 5 presents a problem that occurs when data are recovered through decoding and a measure to solve it. In Section 6, the superiority of the system applied to the EC-based HDFS is verified through experiments and evaluations. Finally, conclusions are drawn in Section 7.
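The in-use marking described above can be sketched as a small load table; the class and method names are illustrative, not the actual implementation, and a real system would also need synchronization across workers.

```python
# Sketch of the disk I/O load-balancing rule: a recovery worker proceeds only
# if every disk it must access is free, marks them in use while recovering,
# and releases them when recovery completes.

class LoadTable:
    def __init__(self):
        self.in_use = set()  # disks currently serving a recovery I/O

    def try_acquire(self, disks):
        """Return True and mark the disks busy only if all are free;
        otherwise defer the recovery request."""
        if any(d in self.in_use for d in disks):
            return False
        self.in_use.update(disks)
        return True

    def release(self, disks):
        """Mark the disks available again once recovery completes."""
        self.in_use.difference_update(disks)
```

Under this rule each disk serves at most one recovery I/O at a time, which is the one-I/O-per-disk constraint the text describes.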
Related Work

DSSs provide cloud storage services by storing data on commodity storage servers. Typically, data are protected from failures of these commodity servers through replication. EC is replacing replication in many DSSs, as it consumes less storage overhead than replication to tolerate the same number of errors. At large storage scales, EC is increasingly attractive for distributed storage systems due to its low storage overhead and high fault tolerance.

Therefore, many DSSs, such as HDFS [9,38], OpenStack Swift [39], the Google File System [29], and Windows Azure Storage [5], use erasure coding as an alternative. In most cases, the RS code is chosen by these distributed storage systems. However, they all pick only one kind of erasure code with fixed parameters; under a dynamic workload where data demand is highly skewed, it is then difficult to trade off the storage overhead against the reconfiguration overhead of the erasure code [40][41][42]. Existing RS codes can incur high reconfiguration overhead when some data are unavailable due to a failure inside the DSS [24,43]. There is growing interest in improving EC's reconfiguration overhead.

EC [44] can lower reconfiguration overhead by allowing unavailable data to be reconstructed from a small number of other servers. Similar ideas were applied in the designs of other ECs [5,23,43,45,46]. On the other hand, another family of erasure codes, called regenerating codes, is designed to achieve optimal network transmission during reconfiguration [21]. However, most of these distributed storage systems deploy only one erasure code to encode the data and are optimized either for storage overhead or for reconfiguration overhead. Yet data demand can be highly skewed in real distributed storage systems; data with different demands may have different performance targets, so applying one erasure code to all data may not meet all targets.
Additionally, to improve the recovery performance of EC, a number of recent studies have applied parallel processing techniques from various viewpoints [47,48]. Typically, these studies either optimize the EC setup or minimize network contention when EC is applied on a parallel processing basis. Studies on optimizing the EC setup choose a stripe configuration appropriate to the data I/O size during parallel I/O, or run degraded I/O efficiently by diversifying the range of data encoding [49,50]. Studies on minimizing network contention reduce the network bottleneck by dividing a recovery operation into multiple small parallel suboperations [51].

In [52], the authors proposed a distributed reconstruction technique called partial parallel repair (PPR) that significantly reduces network transfer time, and thus overall reconstruction time, in erasure-coded storage systems. In [53], the authors proposed a data placement algorithm named ESet for efficient data recovery in large-scale erasure-coded storage systems; ESet improves data recovery performance over existing random data placement algorithms. In [54], the authors proposed the Hadoop Adaptively Coded Distributed File System (HACFS), an extension of HDFS that adapts to workload changes by using two different erasure codes: a fast code to optimize the recovery cost of degraded reads and the reconstruction of failed disks/nodes, and a compact code to provide low and bounded storage overhead. In [55], the authors proposed a recovery generation scheme (BP scheme) to improve the speed of single-disk failure recovery at the stack level, together with a rotated recovery algorithm (RR algorithm) for realizing the BP scheme with reduced memory overhead.
As described above, although existing studies addressed the performance degradation factors of EC through parallel processing and enabled degraded I/O to some extent, they did not consider the I/O load problems that arise between disks when EC is applied in a parallel manner, which leads to heavy inter-disk I/O loads during parallel recovery in EC-based distributed file systems. This study therefore proposes a measure to reduce the large number of tiny disk I/Os in the EC-based HDFS, discusses the problem that occurs during parallel recovery, and proposes a measure to solve it.

Motivation

In this section, the disk I/O problems that occur in the EC-based HDFS are discussed in detail, starting with the basic I/O process. Figure 1 shows the basic I/O process of the EC-based HDFS: a client fetches the file layout through the NameNode and then performs file I/O through the DataNodes. The file layout is the file configuration information for the data stored by clients. Each DataNode consists of a queue manager that assigns events, and masters and workers that perform the services. When a client sends a data storage I/O request, the queue manager puts the event generated by the request into the primary queue. Multiple workers in the DataNode fetch events from the primary queue and call the appropriate service functions of the master or a slave to process them and return the results. Although two requests (write 1 and write 2) are generated, only one is processed at a time, because a single write request is taken and processed by a single worker for concurrency control.
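The dispatch described above can be sketched in a few lines. This is a minimal model, not the HDFS implementation: names such as `primary_queue` and `group_locks` are hypothetical, and the per-block-group lock stands in for the paper's rule that one worker handles one block group at a time.

```python
import queue
import threading

primary_queue = queue.Queue()
group_locks = {}           # block_group_id -> lock (concurrency control)
locks_guard = threading.Lock()
results = []

def worker():
    while True:
        event = primary_queue.get()
        if event is None:            # shutdown signal
            break
        group_id, request = event
        with locks_guard:
            lock = group_locks.setdefault(group_id, threading.Lock())
        with lock:                   # requests for one block group run serially
            results.append(f"processed {request} of group {group_id}")
        primary_queue.task_done()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()

# Two writes to the same block group: processed one after the other,
# even though four workers are available.
primary_queue.put(("bg1", "write 1"))
primary_queue.put(("bg1", "write 2"))
primary_queue.join()
for _ in threads:
    primary_queue.put(None)
for t in threads:
    t.join()
```

With only one block group in flight, three of the four workers stay idle, which is exactly the situation Figure 1 illustrates.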
Figure 2 shows how multiple file I/Os are processed through multiple clients and DataNodes. Three files are processed concurrently: client 1 performs I/O on one file consisting of two block groups, and client 2 performs I/O on two files. As shown in Figure 1, different block groups can be processed concurrently. However, since requests for the same block group are taken and processed by a single worker for concurrency control, only four workers in total are running, and the remaining requests accumulate in the primary queue. Thus, when there are too many I/O requests for small files, the requests pile up in the primary queue, creating a problem of too many waiting I/O requests.

Following the explanation of Figure 2, Figure 3 describes how the cells generated by encoding write 1 are distributed and stored by the master of DataNode 1 after the request is passed in through the queue manager.
Figure 3 shows the processing of the write 1 request on file 1 from the client in a 2 + 2 EC-based HDFS. After the master in DataNode 1 generates two data cells and two parity cells from the original data through encoding, the cells are distributed to the slaves through the queue manager of each DataNode and stored on the disks. That is, the master bears both the encoding and the data distribution overheads, so its processing time is long. In particular, even if multiple workers are available, multiple requests for the same block group must be processed sequentially, one at a time, and a single I/O takes a long time due to the encoding and cell distribution overheads, causing a rapid degradation of overall I/O performance.

Furthermore, when I/O requests are distributed to multiple DataNodes through encoding, they are converted into multiple small I/O requests, including the parity blocks; the performance degradation worsens because ever smaller I/O requests are produced as blocks are divided further.
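This fan-out follows directly from the EC parameters. A small sketch (the function name is ours, not from the system): a k + m setting splits each write into k data cells plus m parity cells, so the cell size is the data size divided by k and the I/O count is k + m.

```python
def ec_io_profile(k, m, data_kb=128):
    """Return (cell size in KB, number of I/Os) for one write under k + m EC."""
    return data_kb // k, k + m

# Fan-out of a 128 KB write for several common settings.
profile = {f"{k}+{m}": ec_io_profile(k, m)
           for k, m in [(2, 2), (4, 2), (8, 2), (16, 2)]}
```

For example, a 16 + 2 setting turns a single 128 KB write into eighteen 8 KB I/Os, while combining sixteen such writes into one 2 MB write (`ec_io_profile(16, 2, 2048)`) keeps the count at eighteen but raises each cell back to 128 KB.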
Table 1 presents the I/O size and the number of I/Os between DataNodes according to the EC setting. The 2 + 2 setting means that two DataNodes store data cells and two store parity cells; for 128 KB of data, encoding yields cells of 64 KB each and four I/Os. The 4 + 2 setting means that four DataNodes store data cells and two store parity cells; for 128 KB of data, encoding yields cells of 32 KB each and six I/Os. Similarly, the 8 + 2 setting results in ten 16 KB I/Os, and the 16 + 2 setting results in eighteen 8 KB I/Os. With such very small I/O requests, overall system performance degrades because of unprocessed I/Os waiting in the primary queue.

Next, the recovery problems occurring in the EC-based HDFS are discussed in detail. The data and parity cells generated in the EC-based HDFS are distributed and stored on different disks to ensure availability, similar to the replication case. When cells are distributed over multiple disks in this way, the distribution and storage method is classified into the clustering and nonclustering storage modes.

In the clustering storage mode, the required disks are clustered in advance and cells are distributed and stored only within a cluster when storing cells; this mode is used in GlusterFS, Ceph, and other EC-based distributed file systems. In the nonclustering storage mode, all disks are treated as one pool and files are distributed and stored on arbitrary disks; this mode is typically used in the EC-based HDFS. Figure 4 shows examples of the clustering and nonclustering storage modes for 4 + 2 EC storage with six DataNodes, each containing four disks.
The clustering storage mode in Figure 4 selects only one disk from each of the six DataNodes, designating a total of four clusters, and distributes and stores cells within the designated clusters, whereas the nonclustering storage mode stores cells on arbitrary disks from among all disks.

The clustering storage mode manages resources and stores files easily because cell allocation is performed at the cluster level. However, it has the drawback of limiting recovery performance, because recovery is also performed only at the cluster level. The nonclustering storage mode is harder to manage because cells are allocated randomly across all disks, but its recovery performance is not limited in this way, because recovery is performed across all disks.
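The two placement modes can be contrasted with a small sketch for the Figure 4 setup (six DataNodes with four disks each; disks are identified by hypothetical `(node, disk)` pairs of our own choosing):

```python
import random

NODES, DISKS_PER_NODE = 6, 4   # the Figure 4 setup
STRIPE = 6                     # 4 data + 2 parity cells per stripe

def place_clustering(cluster_id):
    """Cluster = disk `cluster_id` of every DataNode; cells stay inside it."""
    return [(node, cluster_id) for node in range(NODES)]

def place_nonclustering(rng):
    """Draw any six distinct disks from the whole pool."""
    pool = [(n, d) for n in range(NODES) for d in range(DISKS_PER_NODE)]
    return rng.sample(pool, STRIPE)
```

Clustering always returns the same six disks for a given cluster, which is why recovery traffic stays confined to one cluster; nonclustering can land a stripe on any six of the twenty-four disks.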
Figure 5 shows the disk usage in DataNode 1 during single-threaded fault recovery when a fault occurs on disk 4 of DataNode 6 in Figure 4. As shown in Figure 5, in the clustering storage mode I/Os occur only on disk 4, which belongs to the same cluster as the faulty disk, during the recovery period, whereas in the nonclustering storage mode I/Os occur on disks 2 and 4 sequentially, depending on the affected files. Since a single thread performs the recovery, recovery performance is similar in both modes, but the disks utilized during recovery differ.

Figure 6 shows the disk usage in DataNode 1 during multithreaded fault recovery when a fault occurs on disk 4 of DataNode 6. As shown in Figure 6, in the clustering storage mode I/O still occurs only on disk 4, the disk in the same cluster as the faulty disk, so the performance improvement is minimal despite the use of multiple threads. In the nonclustering storage mode, recovery performance increases as the recovery is performed by multiple threads and many disks participate in the recovery. However, the mean recovery performance in the nonclustering storage mode is 94 MB/s, which is very low compared with the overall system performance. Section 4 presents the efficient data distribution and storage method and the parallel recovery technique for the EC-based HDFS.

Efficient Data Distribution and Storage Method

In this section, the I/O buffering and I/O combining methods are described, which improve the I/O performance of EC in the EC-based HDFS.

I/O Buffering

In the I/O buffering step, a secondary queue is created for each block group that is currently processing I/Os, and subsequent I/Os for the same block group are collected in that queue instead of waiting in the primary queue. Figure 7 shows the processing procedure of the I/O buffering step.
In the I/O buffering process, it is first checked whether a secondary queue exists. If not, the secondary queue is created and the queued I/Os are added to it; if it already exists, the requested I/Os are added to the existing secondary queue. Once the addition is complete, the secondary-queue-in-use flag is finally turned off. Since the number of standby I/Os in the primary queue is minimized through I/O buffering, the performance degradation of the system can be prevented. Figure 8 shows the processing method of the I/O buffering step.
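The buffering decision can be sketched as follows. This is a simplified model under our own names (`secondary`, `in_progress`): a request for a busy block group is parked in that group's secondary queue rather than left waiting in the primary queue.

```python
from collections import deque

secondary = {}        # block_group_id -> deque of buffered requests
in_progress = set()   # block groups currently being processed

def submit(group_id, request):
    """Either start processing a block group or buffer the request for it."""
    if group_id in in_progress:
        # Group busy: park the request in its secondary queue.
        secondary.setdefault(group_id, deque()).append(request)
        return "buffered"
    in_progress.add(group_id)
    return "processing"
```

Submitting write 1 on a file and then writes 2 through 4 on the same file reproduces the Figure 8 situation: write 1 runs while writes 2-4 accumulate in the secondary queue.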
While the first I/O of a file is being performed, subsequent I/O requests for the same file do not stand by in the I/O buffering step; other workers can process them. That is, while master 1 registers that it is performing write 1 on file 1, master 2 fetches write 2 on file 1 and processes it; master 2 also registers that it is performing I/O buffering for the file. Since master 2 has verified, through the in-process file information, that master 1 is processing write 1 on file 1, the secondary queue for the file is created and write 2 is registered in it. This process is then iterated, and master 2 fetches write 5. Thus, Figure 8 shows write 2, write 3, and write 4 for file 1 accumulated in the secondary queue.

I/O Combining

In the I/O combining step, the I/Os accumulated in the secondary queue through the I/O buffering step are combined and processed as a single I/O, rather than being processed one at a time as single I/Os by the workers. Figure 9 shows the processing procedure of the I/O combining step.
First, it is checked whether two or more I/Os are present in the primary queue. If only one I/O is present, the standby I/O is taken from the primary queue, processed, and its result returned; after returning the result, the procedure moves to the step that checks whether data are queued. If two or more I/Os are present, multiple I/Os are taken from the secondary queue and combined into one using the combining technique; the combined I/O is created and processed, and the results are returned. Because these are not multiple single I/Os, this can be extended so that multiple combined I/Os are returned individually, or combined I/Os are processed by the clients. It is then verified whether buffering is in progress for the next block group, and the procedure waits until the buffering process terminates. If buffering is not in progress, it is checked whether the secondary queue is filled: if it is, I/O combining is performed again; if the secondary queue is empty, the flag indicating that the corresponding block group is processing I/Os is turned off, and the processing terminates.
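The combining step itself reduces to draining the secondary queue and merging the buffered writes into one I/O. A minimal sketch, assuming contiguous `(offset, data)` writes (the function and representation are illustrative, not the system's API):

```python
from collections import deque

def combine(buffered):
    """Drain a secondary queue and merge contiguous writes into one I/O."""
    batch = []
    while buffered:
        batch.append(buffered.popleft())
    if len(batch) <= 1:
        return batch                          # nothing to combine
    start = batch[0][0]                       # assumes contiguous offsets
    payload = b"".join(data for _, data in batch)
    return [(start, payload)]                 # a single combined write

# Three buffered 4-byte writes become one 12-byte write.
q = deque([(0, b"\x00" * 4), (4, b"\x01" * 4), (8, b"\x02" * 4)])
combined = combine(q)
```

The single larger write is what keeps the per-cell I/O size up after encoding, instead of fragmenting each small write into its own set of tiny cells.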
Figure 10 shows the situation in which the processing of write 1, owned by master 1, has completed while master 2 is processing write 5. After master 1 returns the processing result of write 1, it verifies whether buffering of the group is in progress and stands by until the buffering is complete; master 1 then checks the secondary queue after master 2 completes buffering. Figure 11 shows the algorithm with I/O buffering and I/O combining applied.

(2-3) When a client saves a file, the file is divided into blocks of a certain size and the metadata of each block are recorded through the file layout process. After recording, the blocks are divided into block groups and passed to the master of the DataNode.
(5-11) The master checks the primary queue, through the queue manager, for more than one requested I/O. If there is more than one, another master within the DataNode is called, and it is recorded ("1") that the secondary queue is in use while the secondary queue is generated. The called master then performs the I/O buffering process, which adds the queued I/Os from the primary queue to the secondary queue. When the addition is complete, I/O combining converts them into one I/O and the result is returned.

(12-16) If there are fewer than two requested I/Os (a single I/O) when the primary queue is checked, the other master is not invoked; the I/O is taken, processed, and its result returned.

(18-25) After the I/O combining process, the procedure returns to line 5 if I/O buffering is in progress for the block group. If there is no I/O in the secondary queue, it is recorded ("0") that the secondary queue is not in use, and I/O buffering terminates.

Table 2 presents the internal I/O processing size according to the number of I/Os combined when 16 requests of 128 KB are processed through I/O combining in 16 + 2 EC. As presented in Table 2, when the 16 128 KB I/Os are processed one at a time in 16 + 2 EC, a total of 288 8 KB I/Os must be processed, whereas only 18 128 KB I/Os are needed if the 288 I/Os can be combined into sets of 16.

Efficient Data Recovery Method

In this section, a parallel recovery technique is introduced that improves recovery performance in the EC-based HDFS by overcoming the recovery problem occurring in the EC-based HDFS (described in Section 2).
Distribution of Disk I/O Loads

Because a single file in the EC-based HDFS is divided into multiple data and parity cells before being stored, disk I/O loads must be distributed during parallel recovery. In addition, contention on the limited set of disks becomes more likely as more recovery workers are run to increase parallel recovery performance, particularly in small-scale distributed file systems that require high space efficiency and have few DataNodes or disks. When block-level processing is required, as during fault recovery, the disk I/O loads become even worse. This section explains how parallel recovery performance can be improved by avoiding I/O load on the disks, which are the limited resources at recovery time.

Figure 12 shows an example of a disk load arising during parallel recovery. The information on the files to be recovered is accumulated sequentially in the recovery queue in the NameNode; recovery worker 1 is currently recovering file 1, and recovery worker 2 is recovering file 2 in parallel. File 3 and file 4, to be recovered next, are waiting in the recovery queue, and a recovery worker that completes its recovery fetches the next file information from the queue and performs that recovery. When recovery worker 1 fetches the recovery information for file 1 and requests recovery from DataNode 1, the master of file 1, one of the workers in DataNode 1 is designated as the master and starts the recovery. Likewise, when recovery worker 2 fetches the recovery information for file 2 and requests recovery from DataNode 6, the master of file 2, one of the workers in DataNode 6 is designated as the master and starts the recovery. As a result, a disk load occurs due to concurrent access to disk 2 of DataNode 3.
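The collision can be seen as a set intersection over the disks each stripe touches. The stripe layouts below are invented for illustration (only the shared disk, disk 2 of DataNode 3, mirrors the Figure 12 scenario); disks are hypothetical `(node, disk)` pairs.

```python
# Hypothetical surviving-cell locations for the two files being recovered.
stripes = {
    "file1": {(1, 1), (2, 3), (3, 2), (4, 4), (5, 1)},
    "file2": {(6, 2), (5, 3), (3, 2), (2, 1), (1, 4)},
}

def contended_disks(file_a, file_b):
    """Disks that two concurrent recoveries would access simultaneously."""
    return stripes[file_a] & stripes[file_b]
```

Any nonempty intersection means the two recovery workers will queue up on the same spindle, which is the load the technique below is designed to avoid.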
The disk I/O load distribution technique proposed in this study places a load information table in the NameNode to check the access load of each disk, and performs a recovery only when the disks to be accessed by the recovery request can bear the load. To distribute the disk I/O loads, only one I/O is allowed on a single disk at fault recovery time. Thus, whether a recovery proceeds is determined by checking the availability of the disks to be accessed before the recovery worker issues the request; the disks accessed by a recovery request are marked as "in use" and marked as available again when the recovery completes. This marking method, together with the check of disk availability before recovery starts, was applied to the EC-based HDFS. Figure 13 shows the core part of the algorithm in which the disk I/O load distribution technique is applied.

(2-9) The EC-based HDFS fetches the information on the file to be recovered from the recovery queue, as well as the layout of the file from the metadata repository, and checks whether the ID of the block group to be recovered matches.

(11-19) After checking the block group ID, the status of the master that will start the recovery of the block group is checked. If the status is not normal, a new block within the block group is designated as the master, or a new block is assigned to replace the faulty block.
(21-27) The disk availability for each block to be recovered is checked. If a disk is "in use (1)," the recovery information being processed is put back into the recovery queue, and the process restarts from the beginning. (29-36) The recovery starts when the recovery request is entered from the master. Once the recovery is complete, the in-use disks are marked as "not using (0)." (38) Finally, the layout of the modified file is stored in the metadata repository and renewed with the new information.
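As a minimal sketch of steps (21-27) and (29-36) — using hypothetical names, not the actual EC-based HDFS code — the per-disk availability table and requeue behavior might look like this:

```python
from collections import deque

disk_state = {}  # (datanode, disk) -> 1 while "in use", else 0

def try_acquire(disks):
    """Steps (21-27): acquire every disk a recovery needs, or none."""
    if any(disk_state.get(d, 0) == 1 for d in disks):
        return False               # some disk is "in use (1)"
    for d in disks:
        disk_state[d] = 1          # mark "in use (1)"
    return True

def release(disks):
    """Steps (29-36): after recovery, mark the disks "not using (0)"."""
    for d in disks:
        disk_state[d] = 0

def recovery_worker(queue, layout, recovered):
    while queue:
        f = queue.popleft()
        if not try_acquire(layout[f]):
            queue.append(f)        # requeue the file and try the next one
            continue
        recovered.append(f)        # stand-in for the actual decode + write
        release(layout[f])

queue = deque(["file1", "file2"])
layout = {"file1": [("dn1", 1), ("dn3", 2)],
          "file2": [("dn6", 1), ("dn3", 2)]}
done = []
recovery_worker(queue, layout, done)
print(done)  # ['file1', 'file2']
```

Run single-threaded the two files simply serialize on the shared disk; with several worker threads, the requeue path is what keeps a busy disk from being hit twice.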
Figure 14 shows parallel recovery by two recovery workers with the disk I/O load distribution technique applied, in the same environment as Figure 12. Recovery worker 1 and recovery worker 2 sequentially fetch the recovery information of file 1 and file 2, respectively. After recovery worker 1 checks the availability of the disks needed to recover file 1, those disks are set to "in use," and the recovery request is sent to the master. Next, recovery worker 2 checks the availability of the disks needed to recover file 2. When recovery worker 2 finds that disk 2 of DataNode 3 is "in use," it reinserts file 2 into the recovery queue and fetches the next standby file, file 3. After recovery worker 2 checks the availability of the disks needed to recover file 3, those disks are set to "in use," and the recovery request is sent to the master.

Random Block Placement

Block placement refers to designating specific disks from the available disks and creating new blocks there. Data and parity cells are allocated according to designated rules to improve data availability and I/O performance, based on the volume setup in the EC-based HDFS. Figure 15 describes the rules for creating new blocks.

1. Block allocation is performed at the block group level.
2. Blocks included in the same block group must be stored on different disks.
3.
Blocks included in the same block group should be stored on different DataNodes as much as possible.

In the clustering storage mode, a disk that belongs to the designated cluster is the target of block placement; in the nonclustering storage mode, all disks are targets of block placement. The sequential block placement method proposed in this section places blocks on DataNodes and disks in a determined order, whereas random block placement places blocks on randomly chosen DataNodes and disks.
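The two placement strategies can be sketched as follows, under an assumed cluster of six DataNodes with four disks each (the dimensions are illustrative, not taken from the text):

```python
import random

DATANODES = [f"dn{i}" for i in range(1, 7)]  # assumed 6-node cluster
DISKS = [1, 2, 3, 4]                         # assumed 4 disks per node

def place_sequential(group_size, start=0):
    """Walk DataNodes (then disks) in a fixed order -- simple and uniform."""
    slots = [(dn, d) for d in DISKS for dn in DATANODES]
    return slots[start:start + group_size]

def place_random(group_size, rng=random):
    """Pick distinct DataNodes first (rule 3), then a random disk on each;
    never reuse a (DataNode, disk) slot within the group (rule 2)."""
    nodes = rng.sample(DATANODES, min(group_size, len(DATANODES)))
    placement = [(dn, rng.choice(DISKS)) for dn in nodes]
    while len(placement) < group_size:       # group larger than node count
        cand = (rng.choice(DATANODES), rng.choice(DISKS))
        if cand not in placement:
            placement.append(cand)
    return placement

group = place_sequential(6)      # one 4 + 2 block group
print(group[:2])                 # [('dn1', 1), ('dn2', 1)]
assert len(set(place_random(6))) == 6   # rule 2 holds
```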
Because the sequential block placement technique places blocks in a determined order, block placement is simple and blocks tend to be placed uniformly. The random block placement technique, in contrast, must check that blocks land on different DataNodes and disks, and blocks are more likely to concentrate on specific DataNodes and disks. Thus, sequential block allocation can be advantageous under general I/O conditions, because its placement logic is simple and loads are distributed as blocks are spread evenly. However, since blocks are placed in a determined order, only a small set of disks holds blocks that can be used at fault recovery time.
If a fault occurs in disk 1 of DataNode 1, the number of blocks accessible for recovering the fault differs with the block placement method. With sequential block placement, only three disks, (DataNode 2, Disk 1), (DataNode 3, Disk 1), and (DataNode 4, Disk 1), can be utilized to recover cells 1-1 and 3-1 during file recovery. In contrast, with random block placement, five disks, (DataNode 3, Disk 1/Disk 2), (DataNode 4, Disk 1/Disk 2), and (DataNode 2, Disk 2), can be utilized. Since the resources that can participate in fault recovery are limited by the block placement technique, it is important for efficient parallel recovery in EC to use the random block placement technique, in which as many resources as possible can participate in the recovery.

Matrix Recycle

The decoding task that recovers lost data in EC is performed in the following order: data acquisition → decoding matrix creation → decoding. Data acquisition reads the available data required for decoding. Decoding matrix creation builds the decoding matrix required for decoding according to the EC setup and the fault location. Finally, decoding recovers the lost data using the acquired data and the decoding matrix.
In particular, the required number of matrices for decoding differs according to EC attributes, such as the erasure code, the number of data divisions, the number of created parities, and the fault locations and count. For example, if a single disk fault occurs in a 4 + 2 EC volume (four data cells and two parity cells) in the EC-based HDFS, the required number of decoding matrices is six. In Figure 17, cells D1 to D4 are data cells and P1 and P2 are parity cells; the red color indicates the fault location, and a total of six decoding matrices are required, one per fault location, in the single-fault case. Table 3 presents the number of matrices needed during multiple faults according to the K + M value.
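If, as in the single-fault example above, each distinct fault pattern requires its own decoding matrix, the counts follow directly from the number of erasure patterns; the short sketch below computes them (this is an assumption consistent with the example, not a reproduction of Table 3):

```python
from math import comb

def decoding_matrix_counts(k, m):
    """Matrices per fault multiplicity, assuming one decoding matrix per
    distinct pattern of up to m lost cells out of the k + m cells."""
    n = k + m
    return {faults: comb(n, faults) for faults in range(1, m + 1)}

print(decoding_matrix_counts(4, 2))  # {1: 6, 2: 15} -- six single-fault cases
print(decoding_matrix_counts(8, 2))  # {1: 10, 2: 45}
```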
As presented in Table 3, as the K + M value increases, the number of matrices used increases; in particular, as the number of data divisions grows, the number of matrices increases exponentially. Thus, as more varied EC volumes are used and the number of tolerated faults increases, the required number of decoding matrices increases.

However, for the same fault attributes, the matrices used are identical. For example, for the same erasure code and the same fault index across multiple volumes with a 4 + 2 attribute, the matrices used are the same. That is, if the number of data divisions, the number of parities, the erasure code, and the stripe size are the same, then even across different volumes in the EC structure, the same matrices are used for a fault in the same location. This study therefore minimizes the time spent creating decoding matrices by registering created matrices in a pool and recycling them whenever the same fault attributes recur.

Figure 18 shows the overall data structure used to recycle matrices.
Figure 19 shows the data structure comprising the matrix. struct matrix_pool, which represents the overall structure, holds struct ec_table entries consisting of encoding information and a matrix; struct ec_table in turn contains struct dc_table entries consisting of decoding information and a matrix, related to the corresponding encoding information. struct matrix_pool consists of ec_tables_count, the number of ec_table entries, and ec_tables, the list of ec_tables. In struct ec_table, the number of data divisions (ec_k), the number of parities (ec_m), the EC type to be used (ec_type), the striping size (ec_striping_size), and the encode matrix (ec_matrix) are the information required for encoding, while the number of dc_tables (dc_tables_count) and the list of dc_tables (dc_tables) are the decoding-related information.
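A minimal sketch of this lookup structure in dictionary form — field names mirror the structs above, while the matrix contents are placeholders for what the EC library would actually compute:

```python
matrix_pool = {}  # (ec_k, ec_m, ec_type, ec_striping_size) -> {err_hash: dc_matrix}

def err_hash(fault_indices):
    """Identify a fault by the count and locations of the lost cells."""
    return tuple(sorted(fault_indices))

def build_dc_matrix(key, fault):
    return ("dc_matrix", key, fault)  # placeholder for real matrix inversion

def get_decoding_matrix(ec_k, ec_m, ec_type, ec_striping_size, fault_indices):
    key = (ec_k, ec_m, ec_type, ec_striping_size)
    dc_tables = matrix_pool.setdefault(key, {})  # find or create the ec_table
    h = err_hash(fault_indices)
    if h not in dc_tables:                       # miss: build and register
        dc_tables[h] = build_dc_matrix(key, h)
    return dc_tables[h]                          # hit: recycle the matrix

# Faults at the same locations in any volume with the same 4 + 2 attributes
# reuse one matrix instead of rebuilding it:
m1 = get_decoding_matrix(4, 2, "rs", 65536, [1, 4])
m2 = get_decoding_matrix(4, 2, "rs", 65536, [4, 1])
assert m1 is m2
```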
struct dc_table stores the information and matrix required for decoding: information identifying the fault location (err_hash) and the related decoding matrix (dc_matrix). The err_hash value in dc_table is a hash calculated from the fault count and locations, used to identify whether a fault occurs in the same location.

Figure 19 shows the matrix recycle step. When decoding is requested, the ec_table whose ec_k, ec_m, ec_type, and ec_stripe_size match is searched for in matrix_pool. If the ec_table is found, err_hash is calculated from the fault information, and the dc_table with the same err_hash is searched for in the dc_table list registered in the ec_table; if that dc_table is found, decoding is performed using its dc_matrix. If the ec_table is not found, it is created and registered in matrix_pool; then err_hash is calculated from the fault information, and a dc_table is created and registered in the ec_table before decoding is performed. If the ec_table is found but the related dc_table is not, a dc_matrix is created from the encoding information and ec_matrix and registered in the ec_table, and decoding is performed using it.

Performance Evaluation

In this section, the superiority of the storage and recovery techniques proposed for the EC-based HDFS in this study is verified through performance evaluation.
The EC volume used in the performance evaluation consists of one NameNode and six DataNodes with 4 + 2 EC. The EC-based HDFS was installed as follows: each node has an Intel i7-7700 3.60 GHz CPU, 16 GB of memory, one 7200 rpm HDD, and 1 G Ethernet, running over the Ubuntu 16.04.1 operating system. The methods proposed in this study were applied, and a performance comparison was conducted.

Performance of Data Distribution Storage

The data distribution storage performance compares the basic EC-based HDFS with EC HDFS-BC, to which the I/O buffering and I/O combining techniques proposed in this study are applied. Figure 20 shows the disk throughput when 100 GB of sample data is created and stored in each distributed file system. The EC-based HDFS showed a slight performance increase but no significant change throughout the experiment, whereas the EC HDFS-BC system exhibited high write throughput as it approached file storage completion, an improvement of about 2.5-fold over the EC-based HDFS.
Figure 21 shows a performance comparison when storing sample data of different sizes in the EC HDFS and EC HDFS-BC systems. No significant difference appeared when storing 10 GB of data; when storing 50 GB, the EC HDFS-BC system stored data about 1.3 times faster than the existing EC HDFS, and when storing 100 GB, its storage time was about twofold faster. These results suggest that the larger the data stored, the starker the difference in storage time.

Data Recovery Performance

The data recovery performance compares the basic EC-based HDFS with the EC-based HDFS to which the disk I/O load distribution, random block placement, and matrix recycle techniques proposed in this study are applied. The EC-based HDFS with the I/O load distribution technique applied is named EC HDFS-LR (load reducing). In the experiment, all file systems used the same 4 + 2 EC volume, and recovery performance was measured for varying numbers of recovery threads when one disk failed. In addition, the EC HDFS and EC HDFS-LR used nonclustering block placement with a round-robin algorithm when placing blocks.
As shown in Figure 22, the performance of the EC HDFS improved only slightly as the number of recovery threads increased. The performance of the EC HDFS-LR improved rapidly up to five recovery threads, after which the gains diminished, and beyond six threads the performance did not improve further. Overall, the EC HDFS-LR performed about two times better than the EC HDFS.

Figure 23 shows the disk usage in a specific DataNode when running six recovery threads in the EC HDFS and in the EC HDFS-LR.
As shown in Figure 23, the recovery performance was 50 MB/s on average in the EC HDFS, whereas the EC HDFS-LR utilized more disks in the recovery; as a result, the recovery performance of the EC HDFS-LR was around 180 MB/s, about 2.5 times that of the EC HDFS.
Figure 24 shows the performance comparison when applying the sequential and random block placement techniques to the EC HDFS and EC HDFS-LR. Applying random block placement improved performance by about 40% compared with sequential block placement; in particular, when random block placement was applied to the EC HDFS-LR, performance improved further.

The memory size used in a K + M EC volume structure increases as the K value grows, and it also increases with the number of faults under the same EC attributes, because more matrices are used. However, the memory used is only 140 KB in the 8 + 2 EC volume structure; that is, recycling matrices in the matrix pool poses no serious problem because the matrix size is small.
Figure 26 shows the results of 100,000 fault recoveries when one disk fault and two disk faults occur randomly in the 4 + 2 EC volume structure. With a single disk fault, over 100,000 repetitions of fault recovery, the recovery time when recycling matrices was about 2.6 times faster; with two disk faults, it was about three times faster. That is, a faster recovery time can be ensured when two or more disk faults occur and the EC volume is set large.

Conclusions

Because the EC-based distributed file system, among the various storage technologies, creates and stores parity cells through encoding alongside the data cells, it has high space efficiency compared with replication methods. However, the EC-based distributed file system suffers significant performance degradation due to the disk I/O loads that occur when storing files and the large number of block accesses required when recovering files. This study therefore selected the HDFS, an EC-based distributed file system, and proposed efficient file storage and recovery methods. For file storage, the buffering and combining techniques improved performance about 2.5-fold compared with the existing HDFS. For file recovery, performance improved about 2-fold by utilizing the disk I/O load distribution, random block placement, and matrix recycle techniques.
Figure 2. Multiple I/O process in the EC-based HDFS.
Figure 3. Process of distributing blocks in the EC-based HDFS system.
Figure 4. Examples of clustering (left) and nonclustering (right) storage modes. Figure 4 shows 4 + 2 EC storage with six DataNodes, each containing four disks; the clustering storage mode selects only one disk from each of the six DataNodes, thereby designating a total of four clusters across which cells are distributed and stored.
Figure 8. Method in the I/O buffering step. I/O requests for the same file are not kept waiting while the first I/O of the file is performed; other workers can process those requests. That is, while master 1 performs registration to conduct write 1 on file 1, master 2 fetches the next request.
Figure 10. Method in the I/O combining step. After master 1 returns the processing results of write 1, it verifies whether buffering of the group is in progress and stands by until the buffering is complete. Master 1 then checks the secondary queue after master 2 completes buffering. If multiple I/Os have accumulated in the secondary queue, they are combined into the designated unit and processed as a single I/O. The final step is to fetch four I/Os from the secondary queue and integrate them into a single combined I/O. Processing multiple I/Os as one through I/O combining increases the I/O size and decreases the number of I/Os, significantly improving I/O efficiency by mitigating network loads and contention.
Figure 11 shows the algorithms applied with I/O buffering and I/O combining.
Figure 12. Example of disk loads due to parallel recovery.
Figure 14. Example of disk contention avoidance due to parallel recovery.
Figure 15. Rules for creating a new block.
Figure 16. Examples of sequential (left) and random (right) block placement after a disk fault. Figure 16 shows the two techniques in a 2 + 2 EC-based HDFS consisting of four DataNodes, each with two disks; four cells are stored in the 2 + 2 EC volume. According to the basic rule of block placement, the four cells are stored on different DataNodes and disks. In the sequential technique, blocks are placed in the order (DataNode 1, Disk 1) → (DataNode 2, Disk 1) → (DataNode 3, Disk 1) → (DataNode 4, Disk 1); in the random technique, blocks are placed on randomly chosen DataNodes and disks. Because the sequential technique places blocks in a fixed order, placement is simple and blocks are likely to be distributed uniformly, whereas the random technique must check that blocks land on different DataNodes and disks, and blocks may concentrate on specific DataNodes and disks. Thus, sequential placement may be advantageous under general I/O conditions, because its placement is simple and loads are distributed as blocks are spread evenly. However, since blocks follow a fixed order under sequential placement, the probability of using blocks stored on many different disks during fault recovery is very low. If a fault occurs in Disk 1 of DataNode 1, the number of blocks accessible for recovery differs with the placement method: with sequential placement, only three disks, (DataNode 2, Disk 1), (DataNode 3, Disk 1), and (DataNode 4, Disk 1), can be utilized to recover cells 1-1 and 3-1, whereas with random placement five disks can be utilized, (DataNode 3, Disk 1/Disk 2), (DataNode 4, Disk 1/Disk 2), and (DataNode 2, Disk 2). Since the resources available at recovery time are limited by the placement technique, it is important to use the random block placement technique, in which as many resources as possible can participate, for efficient parallel recovery in EC.
Figure 18. Data structure to compose the matrix.
Figure 20. Comparison between EC-based HDFS and improvement-applied performances.
Figure 21. Difference in storage time according to file size. 6.2. Data Recovery Performance The data recovery performance was compared between the basic EC-based HDFS and the EC-based HDFS applying the disk I/O load distribution, random block placement, and matrix recycle techniques proposed in this study.
Figure 22. Comparison of recovery performance according to the number of recovery threads.
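The 2 + 2 placement comparison above can be checked with a small script that counts the distinct (DataNode, disk) locations still able to serve reads during recovery after (DataNode 1, Disk 1) fails. The placement functions below are simplified assumptions for illustration, not HDFS's actual placement policy, and indices are zero-based.

```python
# Count recovery-usable disks under sequential vs random block placement
# in a 2 + 2 EC layout: 4 DataNodes x 2 disks, 4 cells per stripe,
# one cell per DataNode. Failed disk: (DataNode 0, Disk 0), zero-indexed.
import random

NODES, DISKS = 4, 2

def sequential_placement(stripe: int):
    # Fixed order: stripe's cells go to nodes 0..3 on the same disk index.
    disk = stripe % DISKS
    return [(node, disk) for node in range(NODES)]

def random_placement(stripe: int, rng: random.Random):
    # One cell per DataNode (basic rule), but each disk chosen at random.
    return [(node, rng.randrange(DISKS)) for node in range(NODES)]

def usable_disks(placements, failed=(0, 0)):
    # Disks (other than the failed one) holding cells of affected stripes.
    affected = [p for p in placements if failed in p]
    return {loc for p in affected for loc in p if loc != failed}

rng = random.Random(0)
seq = [sequential_placement(s) for s in range(8)]
rnd = [random_placement(s, rng) for s in range(8)]
print(len(usable_disks(seq)), len(usable_disks(rnd)))
```

Sequential placement always yields exactly three usable disks, matching the paper's example, while random placement can draw on up to six (three surviving nodes times two disks), which is why it parallelizes recovery better.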
Appl. Sci. 2021, 11, x FOR PEER REVIEW
Figure 23. Disk usage of the EC HDFS (left) and the EC HDFS-LR (right).
Figure 24. Performance comparison according to the block placement technique between the EC HDFS and the EC HDFS-LR.
Figure 25. Memory size (KB) needed for matrix storage. Figure 25 shows the memory usage when single and multiple faults occur in 2 + 2 EC, 4 + 2 EC, and 8 + 2 EC, with the matrix recycle technique applied and the encoding and decoding word size set to 32 bytes. Here, a single fault means that a fault occurs in only one data cell, and multiple faults mean that faults occur in two data cells simultaneously.
Figure 26. Performance comparison according to the matrix recycle.
Table 1. The 128 KB I/O process size in the DataNode according to the EC setting.
Table 2. Combining process scale for 16 128 KB-sized requests in the 16 + 2 EC volume.
v3-fos-license
2021-08-03T00:04:38.878Z
2021-03-01T00:00:00.000
237289961
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://ujvas.com.ua/index.php/journal/article/download/77/99", "pdf_hash": "36797d3401a475b0a21a6f8adcfa24c5399bb726", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1285", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Medicine" ], "sha1": "410e7815c60804785f7aa81854d558b05f154794", "year": 2021 }
pes2o/s2orc
Use of plant-derived drugs in the prevention and treatment of dairy cow mastitis Dairy cow mastitis is one of the most serious diseases affecting dairy herds. Prevention and treatment of this pathology rely mainly on antimicrobials, but increasing antimicrobial resistance among its pathogens may reduce the efficiency of conventional drugs. Plant extracts are increasingly valued by livestock producers because of their wide availability, low toxicity and side effects, and high environmental compatibility. Accordingly, considerable research has been conducted in recent years on the control of dairy cow mastitis with plant-derived drugs. This review summarizes the plant species, main active ingredients, and mechanisms of action of plant extracts used for preventing and treating dairy cow mastitis, and discusses future prospects for plant extracts in the treatment of this disease. Introduction Mastitis is the most important disease in dairy farming. It not only affects the quality of milk but also causes serious economic losses to the dairy industry. Dairy cow mastitis is an inflammatory response of the mammary gland caused by physical, chemical, or biological stimulation as the body interacts with the environment and pathogenic microorganisms (Seegers et al., 2003). With the continuous improvement of living standards, milk, as an excellent dietary source of calcium, has become an indispensable daily drink, and the dairy farming industry has developed rapidly. Although scholars worldwide have conducted unremitting research on the prevention and treatment of mastitis for more than 100 years, no method that can effectively prevent mastitis in cows has yet been found (Gomes & Henriques, 2016).
At present, there are multiple treatments for dairy cow mastitis, including antibiotics, traditional Chinese medicine, biological therapy, and laser therapy. Among them, antibiotics are the main means of controlling mastitis in dairy cows. However, after antibiotic treatment, large amounts of antibiotics and toxins remain in the milk; this not only drives pathogens to develop resistance and reduces efficacy, but also harms human health, and it has gradually become a bottleneck restricting the prevention and treatment of cow mastitis (Swinkels et al., 2015). As natural substances, plant-derived drugs contain a variety of effective biologically active ingredients. They are antibacterial and anti-inflammatory, counteract drug resistance, and have low toxicity and low residues (Tamiris et al., 2020), combining the dual functions of medicine and nutrition. In recent years, researchers worldwide have conducted extensive studies on the use of plant-derived drugs to treat dairy cow mastitis; these drugs play an irreplaceable role in preventing and treating the disease, achieving green farming, and producing animal products and related products free of drug residues (Jin-Lun et al., 2017). The examination and approval of preparations for cow mastitis in China show that plant-derived pharmaceutical products have secured a place (Lai et al., 2017). The author summarizes the plant species, main active ingredients, and mechanisms of action of plant extracts used in recent years for preventing and treating cow mastitis, and discusses prospects for the future development of traditional Chinese medicine in treating this disease. Results and discussion 1. Types of plant-derived medicines and their chemical components.
Published literature shows that more than 30 kinds of plant extracts have an inhibitory effect on dairy cow mastitis in vivo or in vitro, including musk, stevia, scutellaria, rhubarb, astragalus, dandelion, honeysuckle, forsythia, and licorice. Many active plant ingredients, such as phenolic acids, alkaloids, flavonoids, terpenes, and volatile oils, have strong bactericidal effects. They can directly or indirectly kill or inhibit pathogenic bacteria and exert anti-inflammatory effects. Finding new anti-pathogenic ingredients in plant extracts is of great significance to scientific research. 1.1. Phenolic acids. The phenolic acid extracts of various plants, mainly chlorogenic acid, caffeic acid, and tea polyphenol compounds, have protective effects against different types of dairy cow mastitis. Ruifeng G. et al. (2014) reported that the anti-inflammatory effects of chlorogenic acid (CGA) against LPS-induced mastitis may be due to its ability to inhibit the TLR4-mediated NF-κB signaling pathway; CGA significantly reduced TNF-α, IL-1β, and IL-6 production compared with the LPS group. Liu M. et al. (2014) showed that the protective effect of caffeic acid against LPS-induced inflammatory injury in bMECs was achieved at least partly by reducing IκBα degradation and p65 phosphorylation in the NF-κB pathway. Caffeic acid could therefore be beneficial in dairy cows during Escherichia coli mastitis as a safe and natural anti-inflammatory drug. Total phenol extract from Clerodendranthus spicatus effectively scavenged DPPH free radicals, reduced the production of NO and TNF-α in LPS-induced RAW264.7 cells, down-regulated the expression of IL-1β and IL-2, up-regulated the expression of IL-10, and increased the viability of mammary epithelial cells under oxidative stress (Wang et al., 2015). 1.2. Alkaloids.
Alkaloids play an important role in the treatment of many chronic diseases and exhibit strong antibacterial and anti-inflammatory activity. One study reported that chelerythrine, isolated from the root of Toddalia asiatica (Linn) Lam, possesses antibacterial activity through destruction of the bacterial cell wall and cell membrane and inhibition of protein biosynthesis. Chelerythrine showed strong antibacterial activity against the Gram-positive bacteria Staphylococcus aureus (SA), methicillin-resistant S. aureus (MRSA), and extended-spectrum β-lactamase S. aureus (ESBLs-SA) (He et al., 2018). Lai J. et al. (2017) found that indirubin can inhibit the expression of TLR4 in a dose-dependent manner and exerts a therapeutic effect in LPS-induced MMEC inflammation and mouse mastitis. Staphylococcus epidermidis (S. epidermidis) is an opportunistic pathogen with low pathogenicity and a cause of repeated outbreaks of bovine mastitis in veterinary clinical settings. Li X. et al. (2016) suggested that total alkaloids of Sophora alopecuroides inhibit biofilm formation by clinical S. epidermidis isolates and may be a potential agent, warranting further study, for the treatment and prevention of S. epidermidis-related infection in bovine mastitis. 1.3. Flavonoids. Flavonoids have been reported to possess a number of biological properties, such as anti-inflammatory, anti-viral, anti-bacterial, anti-tumor, and immunosuppressive activities. Astragalin, a main flavonoid component isolated from Chinese herbs, has several medical functions, including anti-allergy, anti-atopic dermatitis, and anti-inflammatory effects. Li F. Y. et al. (2014) showed that astragalin suppressed the expression of TNF-α, IL-6, and NO in a dose-dependent manner in mouse mammary epithelial cells (mMECs); the expression of inducible nitric oxide synthase and cyclooxygenase-2 was also inhibited.
Moreover, astragalin efficiently decreased LPS-induced TLR4 expression, NF-κB activation, IκBα degradation, and the phosphorylation of p38 and extracellular signal-regulated kinase in BMECs. It may be a potential therapeutic agent for bovine mastitis. Baicalin, one of the major flavonoids in Scutellaria baicalensis, has natural antioxidant and anti-inflammatory properties in various cell types. Baicalin exerts protective antioxidant effects on bovine mammary cells, suggesting that it could be used to prevent oxidative metabolic disorders in dairy cows (Perruchot et al., 2019). Emodin is an anthraquinone derivative from the Chinese herb Radix et Rhizoma Rhei. Emodin has a protective effect against lipopolysaccharide (LPS)-induced mastitis in a mouse model, reducing MPO, IL-6, IL-1β, and TNF-α; like other flavonoids, it acts on mastitis through the NF-κB pathway. 1.4. Terpenoids. Geniposide is a medicine isolated from Gardenia jasminoides Ellis. Song X. et al. (2014) used a lipopolysaccharide (LPS)-induced mouse mastitis model and LPS-stimulated primary mouse mammary epithelial cells (mMECs) to explore the anti-inflammatory effect and mechanism of action of geniposide. The results showed that geniposide significantly reduced the infiltration of inflammatory cells and downregulated the production of TNF-α, IL-1β, and IL-6. Geniposide exerted its anti-inflammatory effect by regulating TLR4 expression, which affected the downstream NF-κB and mitogen-activated protein kinase (MAPK) signaling pathways. Stevioside, isolated from Stevia rebaudiana, dose-dependently reduced the expression of TNF-α, IL-1β, IL-6, and TLR2, as well as caspase-3 and Bax, by inhibiting the phosphorylation of proteins in the NF-κB and MAPK signaling pathways in the S. aureus-infected mouse mammary gland and in mouse mammary epithelial cells (MMECs). 2. The mechanism of plant-derived medicines on dairy cow mastitis. 2.1. Preventive and therapeutic effects.
Dairy cow mastitis has a huge impact on the dairy industry, and the key to its control is prevention. Preventive measures mainly include scientific feeding management, good hygienic conditions, scientific milking methods, teat dipping, and vaccination. Studies have found that mammary epithelial cells repair themselves by fine-tuning cell death caused by pathogenic bacteria and other factors. Further research indicates that apoptosis predominates in the early onset of mastitis, whereas necrosis predominates as mastitis is aggravated. Therefore, to limit the occurrence and development of inflammation, the body increases the apoptosis of epithelial cells during the inflammatory reaction of the mammary gland, a self-protection mechanism that preserves the integrity of the gland to the greatest extent. Chen G. et al. (2002) found that astragalus polysaccharide (APS) induces apoptosis of tumor cells, reducing the number of cells in S phase and increasing the number of cells in the G0-G1 and G2-M phases; with increasing polysaccharide dose, more cells remained in the G2-M phase, indicating that inducing apoptosis of tumor cells is one anti-tumor mechanism of astragalus polysaccharides. Zhong K. et al. (2007) studied the effect of astragalus polysaccharides on E. coli endotoxin (LPS)-induced experimental mastitis in goats, cows, and rats. The results showed that local infusion of APS into the mammary gland, or feeding it to the animals, alleviated the destructive effect of LPS on mammary tissue and had a certain protective effect. Traditional Chinese medicines and prescriptions used to treat mastitis include synthetic Houttuynia cordata, propolis mixture, Xianfang Huoming Yin, Gongying Shanjia Tang, Erhua Zaozi Yin, and Ruyan San (Wang et al., 2013). Studies by Zhang Z.
(2009) confirmed that treating diseased cows with plant-derived drugs whose main components are honeysuckle and dandelion achieves a good therapeutic effect, significantly better than that of cefazolin sodium. Geng M. et al. (2006) showed that an extract of the plant-derived medicine Ulmus pumila is better than cefalexin in the treatment of dairy cow mastitis. Zhang M. et al. (2001) fed Chinese herbal medicines such as angelica, chuanxiong, astragalus, dandelion, Salvia miltiorrhiza, and motherwort to dairy cows with latent mastitis, and measured the lymphocyte stimulation index (SI) and neutrophil phagocytosis. The results showed that the additive directly strengthens the phagocytic power of neutrophils and stimulates the proliferation of lymphocytes; through the action of antibodies and complement, the phagocytic power of neutrophils is further strengthened, producing an obvious therapeutic effect. Therefore, traditional Chinese plant-derived drugs and their active ingredients have great advantages and potential in the clinical treatment of cow mastitis. 2.2. Inhibition of pathogenic bacteria. According to reports, more than 130 pathogenic microorganisms can cause mastitis in dairy cows, of which more than 20 are common. The pathogens with the highest detection rates are Staphylococcus aureus, Streptococcus, and Escherichia coli, and mastitis caused by this variety of pathogens can account for 90 % of cases. Therefore, antibacterial activity is an important indicator of the effectiveness of plant-derived drugs (Rebhun, 2003). Luan Y. et al. (2005) tested eight Chinese herbal medicines, including Daqingye and Coptis, for inhibition of β-lactamase-producing E. coli. The inhibition screening found that Scutellaria baicalensis had the most obvious inhibitory effect, followed by Coptis and Daqingye. Liu P. et al.
(2006) screened five Chinese herbal medicines, including Forsythia suspensa, Senecio, and Scutellaria baicalensis, for inhibition of resistant strains producing extended-spectrum β-lactamase and persistently high-yield AmpC enzyme. All five traditional Chinese medicines inhibited the extended-spectrum β-lactamase- and AmpC-producing strains to varying degrees; the effect of Scutellaria baicalensis was the most obvious, followed by Coptis chinensis and Senecio. Honeysuckle, known as "Chinese medicine penicillin", has inhibitory effects on a variety of bacteria, including S. aureus, E. coli, Vibrio cholerae, and hemolytic streptococci (Song et al., 2003). In vitro antibacterial experiments on 10 Chinese herbal medicines showed the following. Against E. coli: Myrobalan, Viola, and Houttuynia cordata were moderately sensitive; Prunella vulgaris, Scutellaria, Senecio, Astragalus, Gorgon, and Teasel showed low sensitivity. Against S. aureus: Myrobalan, Scutellaria, pomegranate peel, and Rhubarb had the best effects and were highly sensitive; Forsythia, Chuanxiong, and Shegan were moderately sensitive; Prunella vulgaris, Xanthium, fried gardenia, and Rhubarb showed low sensitivity. Against Streptococcus agalactiae: Astragalus and pomegranate peel were highly sensitive; Ligusticum chuanxiong, Shegan, Phellodendron amurense, and Houttuynia cordata were moderately sensitive; Forsythia, Myrobalan, Radix Scutellariae, Viola, Rhubarb, and Wangbuliuxing showed low sensitivity (Luo et al., 2002). In summary, Chinese herbal medicine has a good inhibitory effect on E. coli, S. aureus, and Streptococcus agalactiae. 2.3. Mechanism of action on inflammation.
In recent years, the role of non-professional immune cells, such as dairy cow mammary epithelial cells, in resisting pathogen invasion of the mammary gland has received attention. When pathogenic bacteria invade the mammary gland, epithelial cells first synthesize and secrete a variety of immunologically active substances to resist infection and reduce or even relieve the inflammatory response. After infection, the lipoteichoic acid, peptidoglycan, and lipopolysaccharide of pathogenic microorganisms can trigger the innate immune system of the mammary gland, activate intracellular signal transduction pathways such as NF-κB, MAPKs, and JAK/STAT, and finally lead to the release of chemokines and inflammatory factors. Most researchers stimulate mammary tissue or mammary epithelial cells with pathogenic microorganisms or their products to establish in vivo and in vitro models of mastitis, then apply Chinese herbal medicines or their main active ingredients and measure changes in inflammatory factors such as IL-1β, IL-6, and TNF-α and in the TLRs, NF-κB, MAPKs, and JAK/STAT signaling pathways. Studies have shown that chlorogenic acid (Ruifeng et al., 2014) and caffeic acid (Liu et al., 2014) in honeysuckle and dandelion, emodin in rhubarb, thymol in musk (Wei et al., 2014), indirubin in Indigo Naturalis (Lai et al., 2017), berberine hydrochloride in Coptis (Ye, 2007), astragalus glycosides in Astragalus, geniposide in Gardenia, the flavonoid baicalin (Perruchot et al., 2019), Clerodendranthus spicatus total phenols (Wang et al., 2015), dandelion sterols (San, 2014), and astragalus polysaccharides (Perruchot et al., 2019) can all inhibit the NF-κB, MAPKs, and JAK/STAT pathways, reduce the expression of inflammatory factors, and exert a protective effect on mammary cells or animals. 2.4. Improve immune function. The mammary glands of dairy cows contain the components necessary for an immune response to invading pathogenic microorganisms.
When the content of immunoglobulin and complement in mammary secretions is low and certain inhibitory factors are present, the immune function of the mammary gland is suppressed (Hu, 1997). According to reports, a traditional Chinese medicine consisting of astragalus, angelica, salvia, dandelion, and other herbs can increase antibody production and promote lymphocyte transformation in animals (Ma, 1986). Zhang Y. et al. (2001) added Chinese herbal medicine additives to the diets of normal dairy cows in early lactation; the results showed that the addition significantly increased milk production by 7.4 (P < 0.01), improved milk composition, and increased the milk fat rate by 11.7 (P < 0.05). This means the Chinese herbal medicine can reduce the incidence of non-clinical mastitis and enhance the immunity of dairy cattle. Another study found that astragalus polysaccharides can enhance the ability of phagocytes, activate macrophages, and promote cell differentiation and the secretion of IL-2, thereby enhancing the ability of macrophages to kill bacteria and strengthening the immune system of dairy cows; they can also stimulate the release of cytokines and affect the neuroendocrine-immune system (Liu et al., 2011). Conclusions The use of Chinese herbal medicine to prevent and treat dairy cow mastitis has the advantages of no drug residues and high economic benefit, and has broad application prospects in the dairy industry. Although prevention and treatment of mastitis by traditional Chinese medicine is indeed effective, current research and development efforts remain limited: there are few varieties of traditional Chinese medicine preparations for mastitis, and most are powders and decoctions.
The active ingredients, their content, and their structures have not been clearly characterized, which restricts wide clinical application and limits efficacy. Further research is needed on methods for separating and extracting the active ingredients of traditional Chinese medicine and on its efficacy and mechanisms of action. On this basis, efficient, safe, and stable Chinese medicine preparations can be developed so that Chinese medicine can play a greater role in the prevention and treatment of cow mastitis.
Effects of COVID-19 on Organizational Psychology in Management and Strategic Context This study aims to identify how businesses born during the COVID-19 period differ, in terms of organizational psychology, from other companies operating in similar sectors, and to determine the basic dimensions of organizational psychology and the factors of these dimensions. The study was conducted with a total of 48 restaurant, cafe, retail, and virtual-store businesses operating in Adana: those established in Turkey before March 2020, when the first cases of COVID-19 emerged, and those born after the pandemic began. According to the findings of the study, three basic dimensions of organizational psychology were identified in the coding: internalization and perception in psychology, psychology management, and reactivity. According to the findings obtained from the participants, the management factors affecting the organizational psychology of the enterprises established before the COVID-19 period were employee and human resources management, innovation and entrepreneurship, and intra-organizational and corporate compliance. Introduction COVID-19 continues to produce some of the most widespread harmful effects in today's world. These effects give a negative momentum to the psychology of individuals [1]. Among these effects are the inability of elderly people to leave their homes and the inability of children and young people to live freely. Urban life offers a limited living space under the influence of the pandemic [2]. Developing technology and digital possibilities are not enough to reduce the impact of the pandemic [3]. Although daily vital activities are limited in light of these developments, inter-human interaction continues at a minimal level in order to meet biological needs. 
While this situation slows the effects of the pandemic, it cannot prevent mutation [4]. Human behavior and psychology have important consequences under the oppressive, socially isolating effect of the pandemic [5]. One of these consequences concerns organizational psychology. Organizational psychology comprises the behavioral patterns and direction of organizational functions, which mirror human psychology [6]. Because of the nature of human psychology, its differentiation according to time and conditions affects organizational functions and behaviors. Managerially, leaders' decisions, made in line with their own psychology, affect the functional outputs of the organization. The extraordinary situations created by the pandemic period affect organizational psychology in administrative terms, so that inter-organizational and social conflicts can arise. In this context, this study reveals how businesses born during the COVID-19 period differ, in terms of organizational psychology, from other companies operating in similar sectors. In addition, the study aims to determine the basic dimensions of organizational psychology and the factors of these dimensions. Sample The study was conducted with a total of 48 restaurant, cafe, retail, and virtual-store businesses operating in Adana: those established in Turkey before March 2020, when the first cases of COVID-19 emerged, and those born after the pandemic began. The study was carried out in January 2021. Human resources, manufacturing and planning, after-sales services, and executive personnel of each business were included in the study. Data Collection and Analysis Phenomenology, one of the qualitative research methods, was adopted in the study [7]. After the research questions were determined, face-to-face interviews were conducted with the participants included in the study. 
Participants were informed that participation was voluntary and that they could stop answering the research questions at any time. The research questions applied to the participants are specified in Table 1 and were obtained as a result of an extensive literature review. After the dimensions and subdimensions were obtained, the questions were directed according to the contents in the table. The questions asked differed according to the sector, organizational structure, turnover, and employee characteristics of each enterprise; therefore, the unstructured interview technique was applied in the study [8]. The obtained data were recorded with traditional methods, and an in-depth investigation of experiences and knowledge of organizational psychology was carried out through interpretive phenomenology [9]. Field description, classification, and component analysis were applied, respectively, in the data analysis. Categories were created by coding the data [10][11][12][13][14][15][16][17][18]. The subject titles and contents obtained by the classification were then subjected to validity and reliability analysis [19]. Sample questions from Table 1 include: "Can you tell us the words that come to mind when you talk about business psychology?" and "What do you think are the main differences between the concept of organizational psychology and human psychology?" Demographic Characteristics Demographic findings are shown in Table 2. According to the research findings, 39.79% of the participants were women and 60.21% were men, and 85.72% of the participants were between the ages of 19 and 40. In addition, by sector, restaurants accounted for 22.91%, cafes 25.00%, retail 20.83%, and virtual stores 58.74%. It was emphasized that the organizational/business climate was important for the team and group work outcomes through which employees contribute to organizational psychology as a performance output. 
It was emphasized that periodic, sustainable training on negative psychological states should be given to managers and employees on these issues. In organizational psychology, after the internalization (perceptual) and managerial processes, the findings revealed the dimension of reactivity. Participants explained that organizational psychology was related to job satisfaction and commitment, productive behavior, job performance, organizational citizenship behavior, innovation, entrepreneurship, and competitive power. The participants stated that the innovative and entrepreneurial behavior of the organization was a psychological reaction, and that this response was necessary for the sustainability of the organization. Concepts such as commitment, adoption, dependency, and satisfaction came to the fore in the coding of the contents of the findings obtained in revealing this dimension (Table 4). Discussion and Conclusion The findings of the study reveal the existence of three dimensions of organizational psychology based on perception, management, and reactivity. These findings have not been reported before in the literature. In the literature, organizational psychology has been the subject of research on group performance, productivity, job-related attitudes, culture, organizational identification, group norms, and job-related behaviors [20][21][22][23][24][25][26]. However, these studies generally focus on these established themes. Digital elements, meanwhile, are indispensable infrastructure elements in the production and service sectors, and these infrastructural elements are necessary for improving organizational psychology. The study is limited to restaurant, cafe, retail, and virtual store businesses operating in Adana, Turkey. Therefore, the sample limitation of the study affects the generalizability of the results. It is recommended that the study be carried out in different sectors and regions. 
In addition, future research on this subject among enterprises operating in the manufacturing sector will contribute to the organizational psychology literature.
Berberine Ameliorates Chronic Kidney Injury Caused by Atherosclerotic Renovascular Disease through the Suppression of NFκB Signaling Pathway in Rats Background and objectives Impaired renal function in atherosclerotic renovascular disease (ARD) may be the result of crosstalk between atherosclerotic renovascular stenosis and amplified oxidative stress, inflammation and fibrosis. Berberine (BBR) regulates cholesterol metabolism and exerts antioxidant effects. Accordingly, we hypothesized that BBR treatment may ameliorate ARD-induced kidney injury through its cholesterol-lowering effect and also through suppression of the pathways involved in oxidative stress, inflammation and NFκB activation. Methods Male rats were subjected to unilateral renal artery stenosis with a silver-irritant coil, and then fed a 12-week hypercholesterolemic diet. Rats with renal artery stenosis were randomly assigned to two groups (n = 6 each), ARD or ARD+BBR, according to diet alone or in combination with BBR. Similarly, age-matched rats underwent sham operation and were also fed the hypercholesterolemic diet alone or in combination with BBR as two corresponding controls. Single-kidney hemodynamic metrics were measured in vivo with Doppler ultrasound to determine renal artery flow. Metrics reflecting hyperlipidemia, oxidative stress, renal structure and function, inflammation and NFκB activation were measured, respectively. Results Compared with control rats, ARD rats had a significant increase in urinary albumin, plasma cholesterol, LDL and thiobarbituric acid reactive substances (TBARS) and a significant decrease in SOD activity. After 12 weeks of BBR treatment, ARD rats had significantly lower levels of blood pressure, LDL, urinary albumin, and TBARS. In addition, expression levels of iNOS and TGF-β were significantly lower in the ARD+BBR group than in the ARD group, with attenuated NFκB-DNA binding activity and down-regulated protein levels of subunits p65 and p50 as well as IKKβ. 
Conclusions We conclude that BBR can improve hypercholesterolemia and redox status in the kidney, eventually ameliorating chronic renal injury in rats with ARD, and that BBR acts against proinflammatory and profibrotic responses through suppression of the NFκB signaling pathway. Introduction Chronic kidney injury caused by renovascular disease increases over time among patients with end-stage renal disease (ESRD) [1]. Renal artery stenosis, most commonly due to atherosclerotic plaques, is an important clinical entity that can lead to hypertension and progressive renal damage [1][2]. In addition to threatening renal function, atherosclerotic renovascular disease (ARD) with renal failure poses a risk for exacerbation of cardiovascular disease and predicts cardiovascular mortality [3][4]. Hence, the mechanisms responsible for renal damage in this disease are being vigorously sought, and effective therapeutic strategies for preserving the kidney are under intense investigation. Berberine (BBR), an isoquinoline alkaloid with multiple pharmacological actions, has been widely used as a therapeutic agent for tumors and microbial infections in China and other East Asian countries [5]. In addition, there have been reports showing its potential to treat diabetes and cardiovascular disease [5][6][7]. Evidence has demonstrated that BBR can effectively regulate cholesterol metabolism, inhibit cell proliferation and counteract oxidative stress [5][6][7][8][9]. In this report, we investigated whether BBR could protect against chronic renal injury in a rat model with hyperlipidemia and unilateral renal artery stenosis. Animals and experimental design Normotensive male Wistar rats, weighing 200-220 g, were provided by the Experimental Animal Center affiliated with Nanjing Medical University, Jiangsu, China. 
Prior to the initiation of the experimental protocols, rats were housed with free access to tap water and food for over 1 week. All procedures were performed under sterile conditions in accordance with the guidelines set by the Institutional Animal Care and Use Committee, Nanjing First Hospital, Nanjing Medical University, and the local law on animal care and protection. To establish the required rat model of hyperlipidemia with unilateral renal artery stenosis, rats were anesthetized with pentobarbital sodium (40 mg/kg, ip). Then, via a flank incision, a 0.3 mm-diameter silver-irritant coil was placed in the left renal artery at baseline to chronically reduce perfusion pressure, and right nephrectomy was performed [10][11]. Rats were fed a 12-week hypercholesterolemic diet of 2% cholesterol and 15% lard [10][11][12]. Single-kidney hemodynamic measures were undertaken in vivo with Doppler probes (Visual Sonics Vevo 2100 MS-250, Canada) to determine left renal artery blood flow. Rats with renal artery stenosis were randomly assigned to two groups (n = 6 each): ARD rats with or without administration of BBR (Sigma, St Louis, MO, USA) at 150 mg/kg per day by gastric gavage, followed by the 12-week hypercholesterolemic diet [5][6]. Similarly, age-matched rats underwent sham operation and were fed a normal diet in the absence or presence of BBR as two corresponding control (CTL) groups: CTL or CTL+BBR. Body weight and food intake were recorded daily during the experimental period. At the end of the 12-week treatment period, rats were placed in individual metabolic cages for a 24-hour urine collection and measurement of food and water intake. They were then anesthetized with pentobarbital sodium (40 mg/kg, ip) and euthanized by exsanguination using cardiac puncture. Blood samples drawn from the inferior vena cava before PBS perfusion were centrifuged at 3,000 g for 5 min, and their supernatants were stored in equal-volume aliquots at −80 °C. 
Urine was collected for measurement of urinary albumin (uALB). After the kidney was harvested, a specimen was fixed, in turn, with 10% formalin and 4% paraformaldehyde, another small fraction was immediately fixed with 4% glutaraldehyde for electron microscopy, and the remaining tissue was cleaned with PBS, snap-frozen in liquid nitrogen, and stored at −80 °C until processed [13]. Measurement of systolic blood pressure Blood pressure was noninvasively measured by a volume pressure recording sensor and an occlusion tail-cuff with a Powerlab/8SP data acquisition system (AD Instruments, Castle Hill, Australia) as described elsewhere [14]. In brief, rats were trained by placing them in restraints, 1 hour daily, for 7 days before the experiments. Upon completion of the training, conscious rats were restrained, gently warmed using a heating lamp, and allowed to rest for 10-15 min. The cuff was then placed around the tail, inflated and released several times. After stabilization, systolic blood pressure (SBP) was recorded every week, three readings at a time, and the average of the recorded values was calculated. Measurement of reactive oxygen species (ROS) and antioxidant capacity Chemiluminescence (CL) was used to measure ROS as described elsewhere [10,12]. Briefly, renal tissues were incubated in 0.9% NaCl solution containing 10 mM phosphate buffer (pH 7.4), 6 mM KCl and 6 mM MgCl2 with the CL probe (Jiangcheng Chemical Company, Nanjing, China). During the incubation period, CL intensity was recorded continuously for 30 min using a Luminescence Reader apparatus [10]. Thiobarbituric acid reactive substances (TBARS), superoxide dismutase (SOD), and catalase (CAT) in kidney cortex were measured, respectively, using commercially available kits (Jiangcheng Chemical Company, Nanjing, China) according to the manufacturer's instructions [12]. 
Histological examination Formalin-fixed tissue fractions (stained with hematoxylin and eosin and Masson trichrome) were evaluated and scored by light microscopy by the same well-trained staff, blinded to the study assignment. In brief, the degree of tubulointerstitial injury, defined as tubular dilatation and/or atrophy, cell infiltrate or cellular edema, was estimated semi-quantitatively according to the following prespecified criteria: grade 0: normal kidney; grade 1: damage of up to 25% of the cortex; grade 2: damage of 26 to 50% of the cortex; and grade 3: extensive damage of >50% of the cortex [14][15]. Interstitial fibrosis was measured by the presence of interstitial collagen in sections stained with Masson trichrome. Immunohistochemical staining Immunostaining was processed in 3 μm paraffinized sections. Slides were dewaxed, and sections were washed three times for 5 min each in PBST (PBS, pH 7.4, 0.05% Tween 20). Then, the microwave antigen retrieval procedure (citrate buffer, pH 6.0) was performed. Rabbit anti-rat iNOS or TGF-β (1:100, Santa Cruz Biotechnology, CA, USA) was used as primary antibody and incubated overnight at 4 °C. Nonspecific binding sites were blocked with 4% goat serum diluted 1:10 in PBST. The primary antibody was detected by horseradish peroxidase (HRP)-conjugated anti-rabbit/anti-rat secondary antibody (Keygen, Nanjing, China), and developed with a DAB chemical kit (Zhongshan Goldbridge, Nanjing, China). Nuclei were counterstained with hematoxylin. All slides were prepared in duplicate; one served as a control for secondary antibody binding specificity. The positive areas were measured in five randomly chosen fields [12][13][15]. Electromobility shift assay (EMSA) Nuclear proteins from kidney cortical tissues were prepared as described elsewhere [5]. Preparation for EMSA was performed with a gel shift assay kit E3300 according to the manufacturer's instructions (Promega, WI, USA). 
Briefly, NFκB binding-site DNA was radiolabeled with γ-32P using T4 polynucleotide kinase. Nuclear extracts (10 μg) were incubated in binding reaction medium with 0.5 ng of 32P-end-labeled oligonucleotide containing the NFκB binding-site DNA for 30 min at ambient temperature. In a competition assay, 50 ng of unlabeled NFκB binding-site DNA or scrambled DNA was used. The DNA-protein complexes were analyzed on 5% polyacrylamide gels and autoradiographed. Band densitometry was estimated on autoradiographs, and values were expressed as relative density units (RDU) against the control band [5,13,15]. Statistical analysis All data were expressed as mean ± SEM. Statistical analyses were performed using the ANOVA test, followed by the Bonferroni correction for multiple comparisons. To compare NFκB activities, the Wilcoxon rank-sum test was used. The differences were evaluated with SPSS 13.0 software (SPSS, Chicago, IL, USA). At least 3 independent experiments were performed. P < 0.05 was considered statistically significant. Figure 2. b) Almost normal histological structure after BBR treatment in CTL+BBR rats. c) Tubulointerstitial inflammation and cell infiltration were evident in ARD rats. d) The renal histological morphology is almost normal after BBR treatment in ARD+BBR rats. B: Assessment of kidney tissue architecture, and interstitial and perivascular fibrosis, by Masson's trichrome staining. a) Normal histological structure. b) Almost normal histological structure after BBR treatment in CTL+BBR rats. c) Compared with CTL rats, ARD rats had more tubular degeneration and ectasia, and more interstitial, periglomerular, and periarterial fibrosis (light blue staining areas). d) The renal histological morphology was almost normal after BBR treatment in ARD+BBR rats. C: Quantification of Masson's trichrome for tubulointerstitial injury score. Data expressed as mean ± SEM, n = 6 per group. *P < 0.05 vs. CTL group. #P < 0.05 vs. ARD group. $P < 0.05 vs. CTL+BBR. doi:10.1371/journal.pone.0059794.g002 Assessment of the ARD rat model As shown in Tables 1, 2, 3 and Fig. 1, ARD rats exhibited increased SBP, accelerated renal blood flow, increased uALB, plasma cholesterol, LDL-cholesterol and TBARS, and decreased SOD activity as compared with CTL rats, indicating that ARD rats were characterized by renovascular stenosis, hypercholesterolemia, increased oxidative stress, impaired antioxidant capability, and albuminuria. Moreover, ARD rats had evident histological and even ultrastructural alterations, such as marked tubulointerstitial inflammation and cell infiltration, tubular degeneration and ectasia, more interstitial, periglomerular, and periarterial fibrosis, increased proliferation of mesangial cells and matrix, and more focal areas of foot process effacement, as examined by light microscopy, electron microscopy (EM), and Western blotting analyses, respectively, as shown in Figs 2, 3, 4. In addition, the ARD rats displayed strong staining for iNOS and TGF-β in their renal endothelium and tubulointerstitium as measured by immunohistochemical staining, as shown in Fig. 5. These data suggest that the ARD rat model was established as anticipated. Assessment of blood pressure and hemodynamics of renal artery Of all 24 coiled ARD rats, 16 (66.7%) were alive until Doppler examination, and 12 underwent further examination. In contrast, CTL rats undergoing sham operation all survived. Doppler ultrasound velocity at the left renal artery was 300 mm/sec in the CTL groups. However, blood flow acceleration to 600 mm/sec was detected in the ARD groups, indicating a 60% increase in renal artery stenosis, as shown in Fig. 1. Compared with CTL, ARD rats had a significantly increased SBP (P < 0.05), regardless of administration of BBR, as shown in Table 2. 
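As a hedged illustration of the statistical workflow described in the Methods (one-way ANOVA followed by Bonferroni-corrected pairwise comparisons, plus a Wilcoxon rank-sum test for the NFκB activity data), the sketch below uses invented group values; only the four group names follow the paper, and the numbers are not the study's data.

```python
# Sketch of the described statistical workflow, with invented data.
from itertools import combinations
from scipy import stats

# Hypothetical measurements (e.g., a normalized biomarker), n = 6 per group.
groups = {
    "CTL":     [1.0, 1.1, 0.9, 1.2, 1.0, 0.8],
    "CTL+BBR": [1.0, 0.9, 1.1, 1.0, 1.2, 0.9],
    "ARD":     [2.1, 2.4, 1.9, 2.6, 2.2, 2.3],
    "ARD+BBR": [1.4, 1.5, 1.2, 1.6, 1.3, 1.5],
}

# Overall one-way ANOVA across the four groups.
f_stat, p_anova = stats.f_oneway(*groups.values())

# Pairwise t-tests with a Bonferroni-adjusted alpha:
# adjusted alpha = 0.05 / number of comparisons.
pairs = list(combinations(groups, 2))
alpha_adj = 0.05 / len(pairs)
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: p = {p:.4f}, significant at adjusted alpha: {p < alpha_adj}")

# Wilcoxon rank-sum test, as used in the paper for NFκB activities.
u, p_ranksum = stats.ranksums(groups["ARD"], groups["ARD+BBR"])
print(f"ANOVA p = {p_anova:.2e}; rank-sum p = {p_ranksum:.4f}")
```

Note that the Bonferroni correction here is applied by shrinking the significance threshold rather than inflating the p-values; the two formulations are equivalent for a fixed number of comparisons.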
Effects of BBR on serum lipid profiling and renal safety biomarkers in the ARD rats Both uALB and serum creatinine (sCr), two of the eight qualified renal safety/functional biomarkers in rats, were assessed in this study. As shown in Table 1, ARD rats had a significant increase in cholesterol, LDL-cholesterol, and uALB as compared with CTL rats (P < 0.05, respectively), and also had a marked increase in triglyceride and sCr, which failed to reach statistical significance. However, 12-week BBR treatment in ARD rats resulted in significantly decreased LDL-cholesterol and uALB (P < 0.05, respectively), and markedly decreased cholesterol, triglyceride and sCr without reaching statistical significance after Bonferroni correction for the multiple comparisons (P > 0.05). Effects of BBR on histological and ultrastructural alterations in the ARD rats HE and Masson trichrome staining of kidney tissue revealed that the ARD kidney exhibited more severe structural damage than the CTL kidney, characterized by sclerotic glomeruli and more extensive regions of focal interstitial fibrosis and tubular atrophy with lymphocytic infiltrates, as shown in Fig. 2A-B. Morphological evaluation of tubular damage (Fig. 2C) supported these results. EM studies identified focal effacement of foot processes and proliferation of mesangial cells and matrix production, consistent with the findings assessed by light microscopy, as shown in Fig. 3. Furthermore, compared with ARD rats, BBR-treated rats showed attenuated histological and ultrastructural damage. Effects of BBR on oxidative stress status in the kidney Oxidative stress biomarkers were assessed by measuring total ROS, SOD, TBARS and CAT in kidney tissues. There was a significant decrease in TBARS levels in response to 12-week BBR treatment in ARD rats (P < 0.05), as shown in Table 3, implying that BBR decreased lipid peroxidation. 
A marked decrease in total ROS and CAT and a pronounced increase in SOD activity were seen with 12-week BBR treatment, but these failed to reach statistical significance after Bonferroni correction for the multiple comparisons. These data indicated that BBR treatment reversed the attenuated SOD activity and that BBR could improve SOD-mediated superoxide anion scavenging. Effect of BBR on the expression of pro-inflammatory and fibrotic molecules The expression of the pro-inflammatory molecule iNOS was markedly elevated in the glomerular and tubular compartments, and the profibrotic factor TGF-β was mainly distributed in the glomerular, perivascular, and tubulointerstitial compartments of ARD kidneys, compared with CTL rats. That increased expression was substantially inhibited in the BBR-treated rats (Fig. 4), indicating an attenuation of renal inflammation. Effect of BBR on NFκB signaling pathway NFκB-DNA binding activities were measured using EMSA. ARD rats had significantly increased NFκB activities as compared with CTL rats; that increase was associated with increased expression of phosphorylated IKKβ and the NFκB subunits p65/p50, and was reversed by BBR treatment (Figs. 4c, 5). The above evidence suggests that the mechanism underlying the suppression of both intrarenal inflammation and tubulointerstitial injury by BBR involves down-regulation of the NFκB signaling pathway. Discussion In this study, the ARD rats exhibited renal dysfunction, increased renal oxidative stress and inflammation as well as tubulointerstitial fibrosis, along with impaired renal structure, as compared with the CTL rats. 
The major findings are summarized as follows: 1) BBR treatment protected against worsening renal function and structure in the ARD rats, in addition to producing a significant reduction in hypercholesterolemia; 2) BBR suppressed renal oxidative stress and inflammation and improved renal antioxidant capacity. Emerging evidence has demonstrated that BBR has multiple beneficial effects, such as lipid-lowering, hypoglycemic, insulin-sensitizing, and weight-lowering properties in diabetes and cardiovascular diseases [5,9,16-18]. However, the renoprotective effect of BBR and its molecular mechanisms in ARD and chronic kidney injury remain to be determined. Oxidative stress, as a result of the excessive production of free radical species, is one of the clinical characteristics of patients with hypertension and atherosclerosis [19][20]. Conversely, hypertension and atherosclerosis have been shown to cause oxidative stress in the kidney [21]. This self-perpetuating cycle can lead to progressive renal disease. Experimental blockade of the oxidative stress pathway with antioxidant vitamins in several disease models has been shown to decrease renal injury [19][20][21]. In the case of renal oxidative stress during atherosclerotic renovascular kidney injury, the predominant free radical component is superoxide, which results in decreased SOD levels in the ARD rats. That highlights the point that the oxidative stress caused by overproduction of oxidants and an impaired antioxidant defense system in the ARD rats may be responsible for the observed renal damage in these rats. A previous study indicated that BBR possesses antioxidative effects via decreasing NADPH oxidase-dependent ROS production in vitro [22]. The results of our work also demonstrated that antioxidant activity and the scavenging rate of total ROS and superoxide were significantly higher in the ARD rats treated with BBR than in those without. 
These results suggest that BBR protects against oxidative renal damage by attenuating free radical production and preserving SOD and CAT activities, thereby improving blood pressure and renal structure and function [23][24]. Proinflammatory cytokines involved in renal oxidative stress can activate the redox-sensitive transcription factor NFκB, which, naturally occurring as a heterodimer of the Rel protein family subunits p65/p50, can increase the production of ROS and reactive nitrogen species, such as superoxide and peroxynitrite [12,25-27]. These ROS can themselves also increase NFκB activity, leading to further oxidative/nitrosative insult, which perpetuates this vicious positive feedback cycle and accelerates renal damage [28]. Once activated, the NFκB p65/p50 heterodimer binds to NFκB binding sites in the promoter regions of its target genes to initiate transcription and protein expression, for example of iNOS and TGF-β, leading to significant inflammatory responses [29][30][31]. Activation of NFκB depends upon the activation of IKK2 (also known as IKKβ) through phosphorylation of the IκB molecule, the inhibitor of NFκB. The kinase activity of IKKβ targets two adjacent serine residues of IκB, leading to its ubiquitination and proteasomal degradation, and to the release and activation of NFκB [31][32]. Many signaling pathways that can activate NFκB converge at the level of IKKβ. Examples of stimuli leading to IKKβ and subsequent NFκB activation include inflammatory cytokines, endotoxins, viral infection, and ROS [30,31,33-35]. In this study, we observed that impaired renal function in ARD rats is the result of increased oxidative stress, pro-inflammation and tubulointerstitial fibrosis, consistent with the results of earlier observations [11][12]. 
As the rate-limiting enzyme, iNOS catalyzes the synthesis of NO from the guanidino nitrogen of L-arginine, resulting in excessive production of NO, which is involved in glomerular mesangial expansion, capillary ectasia and tubulointerstitial fibrosis during chronic renal damage [30,31,33-35]. In addition, TGF-β may mediate renal fibrotic injury through activation of the TGF-β/Smad pathway to facilitate extracellular matrix accumulation [29]. BBR treatment resulted in attenuated levels of intrarenal pro-inflammatory and fibrotic molecules, suggesting that BBR protects against kidney injury through its anti-inflammatory and anti-fibrotic effects. In this work, we observed for the first time that BBR can effectively inhibit NFκB activity and decrease the expression of phospho-IKKβ, p65/p50, and their downstream prototypical target molecules iNOS and TGF-β in the ARD kidneys, suggesting that BBR ameliorates intrarenal inflammation and tubulointerstitial injury, at least in part, through suppression of the NFκB signaling pathway. In summary, we conclude that BBR intervention in ARD rats can suppress proinflammatory and profibrotic responses, improve redox status in the kidney, lower hypercholesterolemia, and eventually ameliorate renal injury, and that these effects appear to be mediated by inhibition of the NFκB signaling pathway. Thus, BBR might play an important role in delaying the progression of chronic kidney injury by preserving renal structure and function in patients with ARD. Author Contributions Conceived and designed the experiments: C-CC XW XC. Performed the experiments: LL G-GM W-JH. Analyzed the data: YZ. Contributed reagents/materials/analysis tools: QZ WC. Wrote the paper: C-CC H-GX.
Color and Chemical Stability of 3D-Printed and Thermoformed Polyurethane-Based Aligners The significant rise in the use of clear aligners for orthodontic treatment is attributed to their aesthetic appeal, enhancing patient appearance and self-confidence. The aim of this study is to evaluate the aligners' color and chemical stability in response to common staining agents (coffee, black tea, Coca-Cola, and Red Bull). Polyurethane-based thermoformed and 3D-printed aligners from four brands were exposed to common beverages to assess color change, using a VITA Easyshade compact colorimeter after 24 h, 48 h, 72 h, and 7 days, as well as chemical stability, using ATR-FTIR spectroscopy. Brand, beverage, and manufacturing method all significantly influence color stability. ATR-FTIR analysis revealed compositional differences, with variations in response to beverage exposure affecting the integrity of polymer bonds. Color change analysis showed coffee to be the most potent staining agent, particularly affecting Tera Harz TC85 aligners, while ClearCorrect aligners exhibited the least susceptibility. 3D-printed aligners showed a greater color change than thermoformed ones. Aligners with a PETG outer layer are more resistant to stains and chemical alterations than those made of polyurethane; additionally, 3D-printed polyurethane aligners stain more than thermoformed ones. Therefore, PETG-layered aligners are a more reliable choice for maintaining the aesthetic integrity of aligners. 
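For readers unfamiliar with colorimetric color-change quantification, the sketch below shows the classic CIE76 ΔE*ab formula, a standard way to turn paired L*a*b* colorimeter readings into a single color-difference number. The L*a*b* values and the 3.3 perceptibility threshold used here are illustrative assumptions for the sketch, not values taken from this study.

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 color difference between two (L*, a*, b*) readings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Hypothetical readings: an aligner before and after 7 days in a beverage.
baseline = (78.2, 1.4, 5.1)
stained = (71.6, 3.0, 12.4)

de = delta_e_ab(baseline, stained)
# 3.3 is one commonly cited clinical perceptibility threshold in dental
# color research; treat it as an assumption here.
print(f"dE*ab = {de:.2f}, clinically perceptible: {de > 3.3}")
```

Later color-difference formulas (e.g., CIEDE2000) weight lightness, chroma, and hue differently, but the CIE76 Euclidean distance remains a useful first approximation.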
Introduction

The utilization of clear aligners in orthodontic treatment has significantly increased over the past two decades, driven by the growing demand for aesthetically pleasing orthodontic solutions [1][2][3]. Aligners, lauded for their aesthetic superiority over conventional braces, have become a preferred choice for patients who seek an inconspicuous method for correcting malocclusions [4,5]. The appeal of these aligners lies in their transparency, which allows patients to undergo orthodontic treatment without the stigma often associated with metal braces, thereby not only enhancing the patient's appearance during treatment but also boosting self-confidence [6].

The manufacturing of clear aligners can be broadly classified into thermoformed and 3D-printed techniques. Thermoforming involves heating a polymer sheet until it becomes pliable and then molding it over a dental model to form the aligner [7]. These aligners are composed of thermoplastic resin polymers such as poly(vinyl chloride) (PVC), poly(ethylene terephthalate) (PET), poly(ethylene terephthalate glycol) (PETG), and thermoplastic polyurethane (TPU). On the other hand, 3D printing, or additive manufacturing, builds the aligner layer by layer directly from a digital model [8][9][10]. These methods employ polyurethane-based materials, chosen for their balance of clarity, strength, and flexibility, which are essential for the continuous force application required to move teeth into their desired positions [9,11]. Polyurethane-based materials, in particular, have been identified for their superior performance in terms of mechanical properties and patient comfort [11,12].
A novel approach in 3D printing utilizes polyfunctional acrylic non-isocyanate hydroxyurethanes, offering an innovative route for creating photocurable thermoset resins suitable for applications like stereolithography. This method bypasses the need for isocyanates, known for their moisture sensitivity, enabling more resilient and adaptable dental aligners through photo cross-linking techniques [13]. Additionally, research into thermosetting nonlinear optical polymers such as polyurethane showcases the potential for creating materials that undergo thermal curing post-electric field poling, indicating a route for enhancing the mechanical properties and long-term stability of aligners through controlled cross-linking [14]. The cross-linking extent and polymerization techniques, especially in 3D-printed polyurethane aligners, focus on achieving a balance between printability before curing and robustness after final thermosetting. Techniques have been developed to enable the shaping of thermoset polymers without cross-linking or excessive fillers, leading to innovations in creating complex 3D structures with isotropic mechanical properties, thereby overcoming traditional limitations of thermosetting resins in 3D-printing processes [15]. The chemistry underlying these advancements encompasses both the meticulous design of polymer networks and the strategic application of polymerization and cross-linking methods to yield dental aligners that are not only effective in treatment but also superior in material properties and comfort.
The aspect of color stability in aligners is paramount, as any discoloration can significantly undermine their aesthetic value. Previous studies on thermoformed aligners have extensively explored their resistance to staining, attributing their color stability to material properties and manufacturing processes [16,17]. Research has shown that thermoformed aligners retain their color when exposed to common dietary agents such as coffee, tea, and wine, primarily due to the surface characteristics and the chemical composition of the polyurethane material used [18].

The chemical composition of dental materials, such as aligners and restoratives, is significantly influenced by exposure to common beverages, leading to changes in their physical and optical properties [19]. Acidic beverages can cause erosion, deteriorate material surfaces, and make them prone to staining and structural weakening [20]. Staining agents like coffee and tea contain chromogenic compounds that adhere to or penetrate these materials, leading to discoloration [21]. Additionally, water absorption from beverages contributes to the hydrolytic degradation of polymers, affecting their mechanical properties and aesthetic appeal [19]. Beverages also contain additives, including acids and colorants, that can chemically interact with dental materials, further exacerbating degradation and staining [20]. Both smoke and beverages can impact the color stability of dental appliances, but the presence of smoke specifically exacerbates discoloration and deterioration, posing a significant threat to both the aesthetic and functional integrity of the appliances [21]. Moreover, the temperature of beverages can induce thermal expansion or contraction in these materials, increasing their susceptibility to damage and discoloration over time [22].
However, with the advent of 3D-printing technology in the fabrication of aligners, there is a compelling need to examine whether these aligners exhibit the same level of color stability as their thermoformed counterparts. Three-dimensional printing offers several advantages, including the ability to produce aligners with complex geometries and a customized fit, but its impact on the color stability of the final product remains underinvestigated [23]. Given the different processing conditions and material formulations involved in 3D printing, it is critical to understand how these factors influence the optical properties of the aligners over time [24]. A few preliminary studies have begun to explore this, suggesting that while 3D-printed aligners show promise in terms of fit and comfort, their resistance to staining and color change under various environmental conditions warrants further investigation [22,25,26].

In light of this, the present study seeks to delve deeper into the color stability of 3D-printed aligners, comparing their performance against the well-documented color stability of thermoformed aligners. By evaluating the aligners' response to common staining agents, this research aims to offer comprehensive insights into the long-term aesthetic durability of 3D-printed aligners, thus filling a significant gap in orthodontic research and potentially guiding future material and process development for aligner fabrication.
Sample Preparation

Four polyurethane-based aligners from two brands of thermoformed aligners, ClearCorrect (Straumann, Basel, Switzerland) and Invisalign (Align Technology, San Jose, CA, USA), as well as two brands of 3D-printed aligners, Tera Harz TC-85 resin (Graphy, Seoul, Republic of Korea) and Clear-A (Senertek, İzmir, Turkey), were used in the study, one for each beverage (Coca-Cola, black coffee, black tea, and Red Bull). Ten composite Gradia Direct Anterior (GC, Tokyo, Japan) A2 shade tooth models, teeth 15-25, were made for each aligner. The most common tooth shade among individuals aged 20-40, who are the primary demographic for aligner therapy, is the A2 shade. This finding is supported by a study that evaluated tooth shade among a group of patients and found that A2 was among the most common shades, indicating its prevalence in a broad population range [27,28].

Beverage Preparation

Standard commercial brands of Coca-Cola (Coca-Cola HBC Hrvatska, Zagreb, Croatia), black coffee (Franck jubilarna original, Franck d.d., Zagreb, Croatia), black tea (Franck d.d., Zagreb, Croatia), and Red Bull (Red Bull GmbH, Fuschl am See, Austria) were used. A tea filter bag was added to 2 dL of hot water (90 °C) and brewed for 3 min, while the coffee was prepared as follows: 2 full teaspoons of coffee were added to 1 dL of boiling water, mixed, and heated again gently until the foam rose. The beverages were left to cool down at room temperature. The samples were stored in a Cultura incubator (Ivoclar Vivadent, Schaan, Liechtenstein) at a temperature of 37 °C. To compensate for the loss due to evaporation, the solutions in which the samples were immersed were refreshed every 24 h throughout the experiment.
Color Change Evaluation

A standard VITA Easyshade compact colorimeter was used to check the color change, which was evaluated at 5 intervals: T0 (before immersion in the solution), T1 after 24 h, T2 after 48 h, T3 after 72 h, and T4 after 7 days. All the measurements were taken in the same room with a standardized light source. Standard measurements were performed by an investigator who was blind to the group division. The flat labial surface of upper tooth 15 to tooth 25 of each aligner was measured. A tooth model was made using composite resin Gradia Direct Anterior (GC, Tokyo, Japan) with an A2 shade in an aligner template isolated with glycerin (Vazelin, Balea, dm-drogerie markt, Karlsruhe, Germany). These models were used as the background reference and set behind the labial surface of each aligner (Figure 1).

Color Change Rating

The color change rating was determined with the help of the National Bureau of Standards (NBS) system to express color differences [29]. The ΔE* value was converted into NBS units with the formula NBS = ΔE* × 0.92 to relate the magnitude of color change to the clinical relevance standard [30]. NBS rating values are as follows: 0.1-0.5, extremely slight change; 0.5-1.5, slight change; 1.5-3.0, perceivable change; 3.0-6.0, marked change; 6.0-12.0, extremely marked change; and 12.0 or more, change to another color.

ATR-FTIR

Fourier transform infrared spectroscopy (FTIR) spectra in the 4000-400 cm−1 range were collected using a Bruker Alpha FTIR spectrometer (Bruker Optics, Ettlingen, Germany) with an ATR accessory. Spectra are the results of 10 continuous scans at a resolution of 4 cm−1 [11]. For instrument control and spectra manipulation, OPUS v7.0 software was used.
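The NBS conversion and rating bands described above can be expressed as a small helper. This is an illustrative sketch only; the function names are ours and not part of the study's workflow:

```python
def nbs_units(delta_e: float) -> float:
    """Convert a CIELab colour difference ΔE* into NBS units (NBS = ΔE* × 0.92)."""
    return delta_e * 0.92

def nbs_rating(delta_e: float) -> str:
    """Classify a ΔE* value using the NBS rating bands listed in the text."""
    nbs = nbs_units(delta_e)
    if nbs < 0.5:
        return "extremely slight change"
    if nbs < 1.5:
        return "slight change"
    if nbs < 3.0:
        return "perceivable change"
    if nbs < 6.0:
        return "marked change"
    if nbs < 12.0:
        return "extremely marked change"
    return "change to another color"
```

For example, a measured ΔE* of 2.0 corresponds to 1.84 NBS units, i.e. a "perceivable change".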
Sample Size

The a priori required sample size was 36 for effect size 0.25, α err prob 0.05, power (1-β err prob) 0.8 with 4 groups and 4 points of measurement. So, 9 measurements were taken per period, per aligner, and per beverage. To be sure of achieving power, we measured a color change of 10 points on each aligner, in the middle of the labial surface of teeth 15-25. This calculation aligns with the recommendation for ensuring sufficient statistical power to detect meaningful effects in clinical and experimental research, especially in fields requiring precise outcome measurements from multiple groups over various time points [31].

Statistical Analysis

Statistical analyses were performed using IBM SPSS Statistics software, version 29.0.1.0 (IBM, New York, NY, USA) to assess the impact of beverage type, brand, and manufacturing method on the color change and stability of the tested materials. Descriptive statistics, including means and standard deviations, are presented to summarize the data. This comprehensive analysis included both within-subjects and between-subjects effects, aiming to uncover significant interactions and trends. A Box's Test was employed to check for equal covariances, a prerequisite for a general linear model (GLM). Moreover, the influence of the independent variables and their interplay was examined through multivariate tests, with Pillai's trace utilized to determine significance. The assumption of sphericity, crucial for a repeated measures ANOVA, was tested using a Mauchly's test. Any detected violations were corrected using the Greenhouse-Geisser adjustment. The threshold for statistical significance was set at p < 0.05.

ATR-FTIR

The ATR-FTIR method was performed for the compositional characterization of polymeric thermoformed and 3D-printed aligners before and after exposure to different beverages. The representative spectra of the control aligner samples of each brand are shown in Figure 2. Specifically, spectra were recorded from different areas of each aligner, including the incisor, canine, and molar regions. According to the FTIR spectra, Invisalign and ClearCorrect are three-layer aligners both made of polymeric materials based on poly(ethylene terephthalate glycol) (PETG) and polyurethane (PU), but with a different layer sequence. The layer arrangement for Invisalign can be shown schematically as PU-PETG-PU, while ClearCorrect has the order PETG-PU-PETG (Figure 2). The mentioned materials are identified based on their characteristic peaks, and a detailed assignment of their FTIR spectra was reported in previous studies [32,33], while the identification of the FTIR peaks for the Invisalign samples is presented in Table 1.
Aligners from Tera Harz TC85 and Clear-A are made only of polyurethane, and even though these are single-layer materials, FTIR analysis revealed significant variations in the spectra depending on the position from which the samples were extracted. This variability can be attributed to the type of material, since a similar anomaly, although to a lesser extent, was also found on the outer polyurethane layers of the Invisalign aligner.

Analysis of the outer layer of samples exposed to different beverages showed that the least changes were observed for the ClearCorrect aligner. The exposure of this material to tea, coffee, and Red Bull did not cause any changes in the PETG spectrum compared to the control sample, while changes in the spectrum of the sample exposed to Coca-Cola are manifested through the appearance of a weak band at 1534 cm−1 and a broad and weak band in the region of the stretching vibration of N-H or O-H bonds.

The spectra of the outer layer of the control sample of the Invisalign aligner as well as the samples treated with the selected beverages are shown in Figure 3.
It was determined that the spectra of the control sample and the sample exposed to Red Bull did not differ, while the spectrum of the sample kept in tea showed a weak band at 730 cm−1. On the other hand, the spectrum profiles of samples exposed to coffee and Coca-Cola show significant deviations from the control sample, which are manifested through the development of an intense band at 730 cm−1, the appearance of new peaks, and their low resolution in the area of the vibrational modes of C-O-C bonds as a part of the polyurethane ester linkage (~1350-1000 cm−1), as well as a decrease in the intensity of the bands at 1596 cm−1 and 1527 cm−1, which originate from C=C stretching vibrations in the aromatic ring and bending of the N-H group, respectively. In these samples, a weaker influence of hydrogen bonding was also observed, which is reflected as an increase in the intensity of the
non-hydrogen-bonded carbonyl band at 1714 cm−1 and the appearance of a broad band of non-hydrogen-bonded N-H stretching vibrations in the range 3560-3400 cm−1, with a simultaneous decrease in the intensity of the bands of their hydrogen-bonded forms. The spectra of the printed Tera Harz TC85 and Clear-A aligners exposed to beverages as well as their control samples showed great variability and therefore cannot be compared. The spectra of the treated samples do not contain any bands originating from the used beverages. The observed changes in the spectra are most likely due to uneven polymerization.

Manufacturing Method

The manufacturing method significantly affects the color stability of aligners. Thermoformed aligners result in a lower mean color change (5.208) compared to 3D-printed aligners (11.376), with a mean difference of 6.168 (p < 0.001). This substantial difference in color change between the two methods was also reflected in the grand mean color change (8.292) for all samples. These results strongly suggest that the type of manufacturing method plays a crucial role in the color stability of aligners.

Brand

Further analysis revealed brand-specific impacts, with Tera Harz TC85 showing the greatest mean color change, significantly differing from ClearCorrect, which exhibited the lowest change (p < 0.001 across all brand comparisons). The mean differences between brands are presented in Table 3. The univariate tests for the brand effect have a high partial eta squared value (ηp² = 0.816), suggesting a strong association.
Beverage

The beverage impact was further corroborated by significant mean differences in pairwise comparisons, especially between coffee and the other beverages (p < 0.001), illustrating a strong association between beverage type and color change (F(3, 144) = 356.181, p < 0.001, ηp² = 0.881) (Table 4). The data present a clear trend in color change (∆E*) over the four time points. There is a progressive increase in mean color change from time points 1 to 4, with significant differences between each consecutive time point (all p < 0.001 after Bonferroni adjustment). The multivariate tests corroborate these findings with high effect sizes (ηp² = 0.692) and statistical power, indicating a strong time effect on color change.

L*a*b* Ratios in Color Change

The effect of beverages on the L*a*b* parameters showed the following: Coca-Cola had a significantly higher L* ratio compared to coffee (mean difference = 0.12742, p = 0.009), while black tea had a significantly higher a* ratio compared to coffee (mean difference = 0.06502, p = 0.003) and Red Bull (mean difference = 0.09164, p < 0.001), suggesting black tea caused more red/green coloration. Coffee had a significantly higher b* ratio compared to black tea (mean difference = 0.14146, p = 0.002) and Coca-Cola (mean difference = 0.16363, p < 0.001), suggesting that coffee exposure led to a greater yellow/blue coloration. Table 5 presents the contribution of each parameter (L*, a*, and b*) to the cumulative color change (∆E*) of aligners from four brands when exposed to different beverages.
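The ∆E* values and the per-parameter contributions summarized in Table 5 follow from the CIE76 colour-difference formula, ΔE* = √(ΔL*² + Δa*² + Δb*²). A minimal sketch of that computation (the helper names are illustrative, not from the study):

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 colour difference between two (L*, a*, b*) triples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

def component_ratios(lab1, lab2):
    """Fraction of the squared ΔE* contributed by L*, a*, and b*, respectively."""
    squared = [(x - y) ** 2 for x, y in zip(lab1, lab2)]
    total = sum(squared)
    return tuple(s / total for s in squared)
```

For two hypothetical readings (70, 0, 0) and (67, 0, 4), ΔE* = 5.0, with the b* shift accounting for 64% of the squared difference, the kind of breakdown Table 5 reports per brand and beverage.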
Color Change Rating

Using the National Bureau of Standards (NBS) system to quantify color changes, the data revealed that coffee was the most potent staining agent across all brands, producing the highest NBS values, indicative of 'extremely marked changes' in color (Table 6). In particular, Tera Harz TC85 aligners were most affected by both coffee and black tea, showing NBS values that suggest drastic color transformations. Invisalign and ClearCorrect aligners were relatively more resistant, although coffee still resulted in 'marked changes' for ClearCorrect and 'extremely marked changes' for Invisalign. Black tea, Coca-Cola, and Red Bull also caused noticeable color changes across brands, with the impact of Red Bull ranging from 'perceivable' to 'marked changes'. When considering the effects of black tea, Tera Harz TC85 aligners were notably susceptible, undergoing an extreme color change after 48 h. Clear-A aligners followed, with a similar level of color change noted at 72 h of exposure. Thermoformed aligners, in contrast, displayed superior resistance, showing no extreme color change even after 7 days of immersion in black tea.
Discussion

Color stability is essential for the aesthetic appeal and patient satisfaction of orthodontic aligners. To achieve this, clear aligner materials must be exceptional at transmitting light, ideally allowing more than 80% of visible light to pass through for maximum clarity. The materials of choice for these aligners are amorphous thermoplastic polymers, valued for their high translucency compared to the visually less appealing, opaque crystalline polymers. Polymers such as polyurethane, polyester, poly(vinyl chloride), polysulfone, and polycarbonate are particularly favored for their beneficial optical properties [16]. Clear aligners' consistent transparency and aesthetic appeal are crucial for their popularity [16]. Despite challenges like discoloration from consuming colored beverages, UV light exposure, and mouthwash use, these aligners are designed to maintain their clarity for one to two weeks of oral use, ensuring they meet the demands for both appearance and functionality [34]. Despite medical advice to remove aligners before eating or drinking anything other than water to prevent staining, research shows that a significant number of patients disregard these guidelines. They continue to eat and drink with their aligners on, compromising their transparency and, as a result, their aesthetic appearance. In fact, one study found that nearly half of all patients chose not to remove their aligners when consuming food and beverages [16,34].
Our analysis highlighted significant variations in color change, influenced by the brand of the aligner, the type of beverage, and the manufacturing process used. The method used to manufacture aligners significantly influences their color stability, with thermoformed aligners exhibiting less susceptibility to color changes compared to those made using 3D-printing technologies. Thermoforming, a process where a plastic sheet is heated to a pliable forming temperature, formed to a specific shape in a mold, and then trimmed to create a usable product, tends to better preserve the original color integrity of the material [38,39]. This preservation is attributed to the uniform material distribution achieved during the thermoforming process, which could minimize the exposure of the polymers to conditions that could predispose them to discoloration. In contrast, 3D printing, which involves the layer-by-layer addition of material to build the final product, might introduce microporosities or variations in the material that increase its propensity to absorb pigments from food, drinks, and other external agents, leading to a higher degree of color change over time [8,22,40].
Furthermore, 3D-printed polyurethane materials may exhibit more staining compared to their thermoformed counterparts due to differences in surface characteristics and material properties arising from their respective manufacturing processes. Specifically, 3D printing often results in parts with higher surface roughness and porosity, which could trap staining agents more easily, whereas thermoforming tends to produce parts with smoother and denser structures that are less prone to staining [41]. The formulations of polyurethane used in 3D printing might also differ from those in thermoforming, with additives in 3D-printing materials potentially affecting stain resistance. Additionally, the thermal history and microstructure from the manufacturing processes could influence the material's stain resistance, with 3D-printed materials potentially having more reactive sites for staining due to rapid cooling and layer-by-layer construction. Furthermore, chemical exposure during 3D printing could alter surface properties, impacting stain interaction [42][43][44].

Further scrutiny into the effects of different brands on the color stability of orthodontic aligners has brought to light distinct disparities in how various materials react to potential staining agents. In this analysis, it was discovered that aligners made from Tera Harz TC-85 resin underwent the most substantial mean color change when exposed to staining substances compared to Clear-A, Invisalign, and ClearCorrect. This finding starkly contrasts with the performance of ClearCorrect aligners, which demonstrated minimal color alteration among the brands tested. Various studies have concluded that polyurethane is more susceptible to pigment adsorption and does not provide adequate color stability [36]. This significant variation underscores the influence of material composition and the proprietary manufacturing processes employed by each brand on the aligners' susceptibility to discoloration.
The gradual escalation in the average color change of orthodontic aligners across different time intervals, demonstrating a steady progression in discoloration, is supported by the findings of several studies. For instance, Liu et al. [16] evaluated the color stabilities of three types of orthodontic clear aligners exposed to staining agents and observed slight color changes after short-term exposure, with significant differences in color change (∆E*) after longer exposures, indicating a continuous and measurable deterioration in the aligners' appearance over time.

Furthermore, Venkatasubramanian et al. [30] conducted an in vitro study examining how clear aligners changed color upon exposure to various indigenous food products. The study found that the hue of the aligners noticeably changed when exposed to substances like turmeric, saffron, Kashmiri red chili powder, and coffee at both 12 and 24 h intervals, reinforcing the trend of worsening color stability over time. This study's findings regarding the impact of beverages on the color stability of orthodontic aligners are supported by existing research, which indicates significant variances in how different beverages affect aligner materials. Coffee, in particular, has been identified as a major culprit in inducing color change across various types of orthodontic appliances and materials, such as aesthetic ceramic brackets, adhesive samples, and aligner materials [34,45]. Moreover, Liu et al. [16] investigated the color stability of three types of orthodontic clear aligners exposed to staining agents, including coffee. They found that Invisalign aligners stained with coffee exhibited significantly higher color changes compared to other beverages, highlighting the beverage-specific impact on aligner aesthetics, particularly the detrimental effect of coffee, which is confirmed in our study.
The L*a*b* color system is designed to encompass all perceivable colors, where 'L*' represents lightness, 'a*' denotes the spectrum from green to red, and 'b*' captures the spectrum from blue to yellow. Through this analysis, it was observed that exposure to black tea resulted in a noticeable shift toward redness in the aligners, as indicated by an increase in the 'a*' value. This suggests that compounds in black tea, such as tannins, have a specific effect on the aligner material that accentuates red hues [34]. Conversely, the impact of coffee on aligners was distinctly different, leading to an increase in the 'b*' value, which signifies a shift toward more yellow tones. The yellowing effect caused by coffee can be attributed to the presence of chromogens and other staining molecules in coffee that have a strong affinity for the aligner material, embedding within and altering its intrinsic color to a more yellow shade [16]. Considering that the FTIR spectra of the treated samples in this study do not contain any bands originating from the used beverages, an additional explanation could be that PU materials are susceptible to yellowing when exposed to ultraviolet (UV) radiation and oxygen, a phenomenon attributed to the presence of nitrogen atoms within their structure. This yellowing process is a result of photochemical degradation, which involves the scission of the urethane group and photooxidation of the central methylene group situated between aromatic rings. These reactions lead to the formation of quinone structures, which are yellow chromophoric reaction products, thus causing the PU surface to yellow. The process is quantifiable by measuring changes in the CIELab* color components, where a systematic tendency toward higher values with increasing irradiation time is observed, indicating a greater degree of yellowing. This degradation is correlated with an increase in carbonyl group concentration, further evidencing the chemical changes occurring within the PU
material under UV exposure [46]. The CIELab* color system is widely endorsed for assessing color changes in dentistry, as it reflects human perception [47].It is generally agreed among researchers that color alterations with a ∆E* value of 3.7 or higher, as measured by a spectrophotometer, are noticeable to the naked eye or clinically unacceptable [48,49].Therefore, in this study, any color change values below 3.7 were deemed satisfactory, which was present only in Invisalign after 7 days of immersion in Coca-Cola, and ClearCorrect in black tea and Coca-Cola. To address the staining issues caused by beverages on orthodontic aligners, various cleaning methods have been investigated for their effectiveness and impact on the thermoplastic materials used in aligners [50][51][52][53].Mechanical brushing, often recommended for daily hygiene, can remove surface stains but must be performed gently to avoid microscratches that could harbor bacteria and increase staining over time.Chemical cleaners, such as hydrogen peroxide-based solutions or specialized orthodontic cleaning tablets, offer an alternative that can reduce staining without physical abrasion.Studies have shown that these chemical agents can effectively minimize discoloration without significantly altering the mechanical properties of the aligners, such as tensile strength or elasticity.However, the excessive use of harsh chemicals should be avoided as they may cause brittleness or unwanted changes in the aligner material over extended periods.Ultimately, the choice of cleaning method should balance effectiveness in stain removal with the preservation of the aligner's integrity and comfort for the patient [50][51][52][53]. 
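The ∆E* threshold discussed here is the classic CIE76 color difference: the Euclidean distance between two measurements in L*a*b* space. A minimal sketch of the calculation, using hypothetical before/after spectrophotometer readings (the values below are illustrative, not measurements from this study):

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 color difference dE*ab: Euclidean distance in L*a*b* space."""
    dl = lab1[0] - lab2[0]
    da = lab1[1] - lab2[1]
    db = lab1[2] - lab2[2]
    return math.sqrt(dl * dl + da * da + db * db)

# Hypothetical readings for one aligner sample (illustrative only):
baseline = (78.0, 1.2, 8.5)   # untreated: L*, a*, b*
stained = (74.5, 2.0, 11.0)   # after immersion: darker and yellower

de = delta_e_ab(baseline, stained)
print(f"dE* = {de:.2f}")  # dE* = 4.37
print("clinically unacceptable" if de >= 3.7 else "satisfactory")
```

Because ∆E* is a single magnitude, a shift driven mainly by b* (coffee) or by a* (black tea) can yield the same total value, which is why the per-parameter contributions of L*, a*, and b* are reported separately in Table 5.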
The significant changes observed in the spectrum profiles of polyurethane samples exposed to coffee and Coca-Cola, including the development of an intense band at 730 cm−1, the appearance of new peaks, and changes in the vibrational modes of C-O-C bonds as part of the polyurethane ester linkage, indicate alterations in the material's chemical structure due to exposure to these substances. These changes could lead to alterations in the physical and mechanical properties of the polyurethane, such as its flexibility, strength, and durability [54-56]. The decrease in the intensity of the bands associated with C=C stretching vibrations and the bending of the N-H group suggests a weakening of these chemical bonds, potentially leading to decreased material robustness [57,58]. A reduction in hydrogen bonding, as indicated by changes in the intensity of bands associated with non-hydrogen-bonded forms, could affect the material's thermal stability and water resistance. Polyurethane's resistance to environmental factors such as temperature and moisture is crucial for its performance in various applications, from medical devices to coatings and insulation [59-62].

In clinical settings, materials like polyurethane are often chosen for their specific properties, such as biocompatibility, strength, and durability. Changes in these properties due to chemical exposure could impact the safety and efficacy of medical devices made from polyurethane. For instance, alterations in the material's chemical structure could potentially lead to increased degradation rates, affecting the longevity and performance of implanted devices or coatings used in medical applications [63,64].
While our study offers valuable insights into the color and chemical stability of polyurethane-based aligners exposed to common beverages, it also presents several limitations that warrant consideration. Our methodology focused on a static evaluation of beverage-induced staining without considering the mitigating effects of daily cleaning routines. This limits the applicability of our findings to real-life scenarios where aligners are regularly cleaned by users, potentially influencing the degree of discoloration experienced. The reliance on the CIELab* color difference formula over CIEDE2000 in dental research, primarily driven by its historical acceptance and straightforward methodology for quantifying color differences, introduces a limitation in accurately capturing the nuances of human color perception. While CIELab* offers simplicity and wide applicability, it may not always reflect perceptual color differences as accurately as the more sophisticated CIEDE2000 formula, especially in scenarios requiring a high degree of color discrimination. Therefore, future work should aim to establish an evaluation framework that encompasses advanced colorimetric assessments as well as an analysis of surface roughness and mechanical characteristics, since these affect aesthetics and functional longevity.
Conclusions

This study highlights that the difference in performance was notable between manufacturing methods, with 3D-printed polyurethane aligners showing more significant staining than thermoformed ones. Such disparities underscore the importance of manufacturing techniques in determining the resilience of aligners to staining substances like coffee. Further, our findings reveal that aligners (ClearCorrect) incorporating an outer layer of PETG demonstrate superior resistance to staining and chemical alterations compared to those fabricated entirely from polyurethane, which are more vulnerable to damage. These findings highlight that aligners with a PETG outer layer could offer a more stable option for those seeking to maintain the aesthetic quality of their orthodontic appliances.

Figure 1. Scheme of teeth models and measurement.

Figure 2. Material characterization: A-outer vestibular layer, B-outer layer in contact with teeth, C-middle layer.

Figure 3. The spectra of the vestibular outer layer of the control sample of the Invisalign aligner as well as the samples treated with the selected beverages.

Figures 4-7 show aligners after 24, 48, 72 h, and 7 days of immersion. Three-dimensional-printed aligners experienced a drastic color transformation when exposed to coffee, with this extreme change occurring within just 24 h. Invisalign aligners, while more resistant, still reached this level of color change after 48 h of coffee immersion. Notably, ClearCorrect aligners showcased remarkable resistance, with no such extreme color change observed even after 7 days in coffee.

Polymers 2024, 16

Table 1. FTIR peak identification for the Invisalign samples.

Table 2. Descriptive statistics (mean and standard deviation (SD)) of ∆E* across 5 time points and among different beverages and brands.

Table 3. The mean differences in color change ∆E* among brands.

Table 4. The mean differences in color change ∆E* among beverages.

Table 5. The contribution of each parameter (L*, a*, and b*) to the cumulative color change (∆E*) of aligners from four brands exposed to different beverages.

Table 6. Color change rating among brands and beverages.
Efficacy of tolvaptan for chronic heart failure

Abstract

Background: This study protocol is proposed for the systematic evaluation of the efficacy and safety of tolvaptan in the treatment of chronic heart failure (CHF).

Methods: We will search the following electronic databases for randomized controlled trials assessing the efficacy of tolvaptan in patients with CHF: PubMed, Embase, Cochrane Central Register of Controlled Trials, Web of Science, Scopus, Chinese Biomedical Literature Database, China National Knowledge Infrastructure, VIP Information, and Wanfang Data. Each database will be searched from inception to February 1, 2019 without any limitations. The entire process of study selection, data extraction, and methodological quality evaluation will be conducted by 2 independent authors.

Results: This proposed study will compare the efficacy and safety of tolvaptan in the treatment of patients with CHF. The outcomes will include all-cause mortality; change in body weight, urine output, and serum sodium; and the incidence of all adverse events.

Conclusion: The findings of this proposed study will summarize the current evidence on tolvaptan for CHF.

Ethics and dissemination: All data used in this systematic review will be collected from previously published trials; thus, no research ethics approval is needed for this study. The findings of this study will be published in a peer-reviewed journal.

PROSPERO registration number: PROSPERO CRD42019120818.

Introduction

Chronic heart failure (CHF) is one of the most serious cardiovascular diseases worldwide. [1-4] It often causes a series of cardiac dysfunctions, such as ejection dysfunction, decreased cardiac output, and increased intracardiac pressure. [5-7] Most importantly, this condition also results in high mortality, of about 50% within 5 years. [8] Epidemiological studies have reported that its prevalence is about 1 to 2% among the general population.
[9,10] The huge cost of CHF treatment also brings a great burden for both families and society. [11-13]

Several previous trials have reported that tolvaptan can treat heart failure effectively. [14-22] Although one systematic review has assessed the efficacy and safety of tolvaptan for the treatment of acute heart failure, [23] no systematic review and meta-analysis has been conducted to specifically explore the efficacy and safety of tolvaptan for CHF, despite the many published clinical trials in CHF. [24-28] Therefore, in this proposed protocol for a systematic review, we will specifically investigate the efficacy and safety of tolvaptan for the treatment of CHF.

Methods

2.1. Inclusion criteria for study selection

2.1.1. Study types. This proposed study will include randomized controlled trials (RCTs) that have assessed any type of tolvaptan for the treatment of patients with CHF. No restrictions on the location, time, or language of the published papers will be applied. However, we will not consider non-clinical studies, case studies, non-RCTs, or quasi-RCTs.

2.1.2. Participants. All participants must be clinically diagnosed with CHF, and will be considered without any limitations of race, gender, or age.

2.1.3. Interventions. The experimental group must have been treated with tolvaptan alone. The control group must have been treated with other therapies, but not any type of tolvaptan.

2.1.4. Outcomes. The primary outcome is all-cause mortality. The secondary outcomes consist of change in body weight, urine output, change in serum sodium, and the incidence of all adverse events.
2.2. Strategy of literature retrieval

The literature search will be mainly based on the electronic databases of PubMed, Embase, Cochrane Central Register of Controlled Trials (CENTRAL), Web of Science, Scopus, Chinese Biomedical Literature Database, China National Knowledge Infrastructure, VIP Information, and Wanfang Data, from inception to February 1, 2019 without any limitations. Additionally, clinical registry websites and the reference lists of included trials will also be searched. A sample of the detailed search strategy for CENTRAL has been built and is shown in Table 1. A similar search strategy will be applied to the other electronic databases.

2.3. Data extraction and methodological quality evaluation

2.3.1. Study selection. Two independent authors will initially screen the titles and abstracts of all potential studies. Then, full texts will be read if there is insufficient information to judge a study against the inclusion criteria. Disagreements will be resolved by discussion with other authors. The flowchart of study selection is demonstrated in Figure 1.

2.3.2. Data extraction and management. Two independent authors will conduct data extraction according to a predefined standard data extraction form. Divergences will be resolved by consulting other authors. The form comprises the following information: title, first author, year of publication, diagnostic criteria, inclusion and exclusion criteria, sample size, details of randomization, allocation, and blinding, intervention details, and outcomes. Any insufficient or missing data will be requested by contacting the primary authors through email.

2.3.3. Methodological quality evaluation. We will evaluate the methodological quality of each study by using the Cochrane Risk of Bias Tool. Two independent authors will evaluate the methodological quality of each study. Any disagreements will be resolved by discussion with other authors.

2.4. Statistical analysis

We will use RevMan 5.3 software to pool and analyze the data.
Continuous outcome data will be synthesized and presented as the mean difference or standardized mean difference with 95% confidence intervals (CIs). Dichotomous outcome data will be synthesized and presented as the risk ratio with 95% CIs. Heterogeneity among the included studies will be identified by the I² test. I² ≤ 50% is regarded as indicating reasonable heterogeneity, and a fixed-effect model will be used to pool and analyze the data. I² > 50% is considered to indicate significant heterogeneity, and a random-effect model will be utilized to pool and analyze the data. Then, subgroup analysis will be conducted to detect the possible reasons that may account for the high heterogeneity. It will be carried out according to the different study characteristics, types of treatments, and outcome measurements. If this does not resolve the heterogeneity, the data will not be pooled and meta-analysis will not be performed; instead, a narrative summary will be reported. Additionally, sensitivity analysis will be carried out to test the robustness of the pooled results by removing low-quality trials. Moreover, unit-of-analysis issues will be considered if crossover studies are included, and only first-period data from such studies will be pooled and analyzed. Finally, reporting biases will also be assessed by using a funnel plot [29] and Egger's regression [30] if more than 10 studies are included.

Discussion

CHF is a very severe cardiovascular disease and often results in very high mortality. Tolvaptan has been reported to treat heart failure effectively, especially acute heart failure. However, no systematic review has addressed its efficacy and safety for the treatment of patients with CHF, although many high-quality clinical trials have been published. [24-28] Thus, this study will be the first to systematically explore the efficacy and safety of tolvaptan for CHF. It will provide the first rigorous summary of evidence on tolvaptan for CHF across all published RCTs.
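The I²-based choice between fixed-effect and random-effect models described in the statistical analysis section can be sketched as follows. This is a minimal illustration using Higgins' I² computed from the inverse-variance Q statistic; the effect sizes and variances below are hypothetical, not data from any included trial:

```python
def i_squared(effects, variances):
    """Higgins' I^2 (%) from per-study effect sizes and their variances,
    via the inverse-variance fixed-effect Q statistic."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    if q <= df:  # no heterogeneity beyond what chance alone would produce
        return 0.0
    return 100.0 * (q - df) / q

# Hypothetical mean differences in body weight (kg) from three trials,
# with the variances of those estimates:
effects = [-1.2, -0.8, -1.5]
variances = [0.04, 0.09, 0.06]

i2 = i_squared(effects, variances)
model = "random-effect" if i2 > 50 else "fixed-effect"
print(f"I^2 = {i2:.1f}% -> use a {model} model")
```

In practice RevMan performs this calculation internally; the sketch only makes the 50% decision rule stated in the protocol explicit.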
The pooled results will provide a better understanding of the efficacy and safety of tolvaptan for patients with CHF. The findings will inform our understanding of the value of tolvaptan in treating CHF. Additionally, they may also provide helpful evidence for clinical practice and future studies.

Acknowledgments

This study is supported by the Heilongjiang Provincial Health Department Scientific Research Project (2011-354). The funder had no role in the design, execution, or writing of the study.
The Effect of Macroeconomic Variables on the Stock Returns of Companies Listed on the Stock Exchange: Empirical Evidence from Indonesia

This study aims to analyze the effect of macroeconomic variables on the overall return of company shares, proxied by changes in the composite stock price index. The study uses secondary data over a period of 20 months, from November 2016 to June 2018, and the analysis technique is multiple linear regression. The study found that the macroeconomic variables consisting of the inflation rate, interest rate, money supply, and foreign exchange rate have a significant effect on the stock returns of companies on the Indonesia Stock Exchange.

Keywords: macroeconomics, agency theory, financial management

Introduction

Most research shows that inflation has a significant impact on stock returns. Whether the impact is positive or negative, however, is a matter of much debate. Chen et al. (2005) concluded that inflation cannot predict stock returns. According to Tripathi and Kumar (2014), the relationship between inflation and stock returns in BRICS is contradictory, with Russia showing a significant negative relationship, while India and China show a significant positive relationship. Priyono (2016, 2018, 2019) and Priyono, Briyan Cadalora Putra, and Cisa Cadalora Putri (2019) state that there are two traditional annual inflation peaks in Indonesia. The December-January period always brings higher prices due to Christmas and New Year celebrations, while traditional flooding in January (in the middle of the rainy season) disrupts distribution channels in several regions and cities, resulting in higher logistics costs. The second inflation peak occurs in the July-August period. Inflationary pressure in these two months comes as a result of holidays, the Muslim holy fasting month, Eid al-Fitr, and the beginning of the new school year. Significant increases can be detected in expenditure on food and other consumables
(such as clothing, bags, and shoes), along with retailers adjusting prices upwards.

Priyono, Briyan Cadalora Putra, and Cisa Cadalora Putri (2016, 2018, 2019) state that, as a social implication, policy makers can apply empirical time-series evidence as a theoretical foundation when establishing fiscal, monetary, or exchange rate policies to stabilize output and employment using interest rates, the money supply, and exchange rates. Abedallat and Shabib (2012) studied the impact of macroeconomic indicators, namely changes in investment and gross domestic product (GDP) as independent variables, on the movement of the Amman Stock Exchange index as the dependent variable for the 1990-2009 data period. They found a relationship between the two macroeconomic indicators (investment and GDP) and the Amman Stock Exchange index, and also between each separately and the stock index, which means that price movements on the Amman Stock Exchange are influenced by the movements of these two variables. Furthermore, they found that the impact of changes in investment was greater than the impact of changes in GDP on the Amman Stock Exchange index. Gunsel and Cukur (2007) analyzed the effect of macroeconomic factors on London stock returns for the period between 1980 and 1993. They developed seven predetermined macroeconomic variables: the term structure of interest rates, the risk premium, the exchange rate, the money supply, unanticipated inflation, sectoral dividend yields, and sectoral unexpected production were used as independent variables, with returns on London shares as the dependent variable. The results show that macroeconomic factors have a significant influence on the UK stock market, but each factor can affect different industries in different ways. That is, macroeconomic factors can affect one industry positively,
but negatively affect other industries.

Interest rates are defined as the price of money: the proportion of loaned funds demanded by investors for the use of those funds. Many governments use interest rates as a monetary policy tool to control other macroeconomic variables such as investment, inflation, and unemployment. Alam and Uddin (2009) found that interest rates have a significant negative relationship with stock prices for 15 developed and developing countries using data from 1988 to March 2003. According to Humpe and Macmillan (2007), stock prices are negatively correlated with long-term interest rates in the US and Japan.

The stock market is very important in the economic development of an economy given its role as an intermediary between borrowers and lenders. The stock market is very important in Indonesia for mobilizing long-term capital to listed companies by collecting funds from different investors to enable them to expand their businesses, and for offering investors alternative investment paths for their surplus funds. In addition, the level of stock market development in an economy is a major factor in determining overall financial development and sustainability (Ashaolu & Ogunmuyiwa, 2010). A well-functioning stock market contributes to economic development through a more efficient allocation of resources and increased savings (Junkin, 2012).
Previous studies concluded that changes in stock prices are related to macroeconomic factors. According to Liu and Shrestha (2008), a country's macroeconomic activity has an effect on stock market returns. Muradoglu et al. (2000) show that changes in stock prices are related to macroeconomic behavior in developed countries. The Arbitrage Pricing Theory (APT) championed by Stephen Ross (1976) also provides a theoretical framework for the relationship between stock prices and macroeconomic fundamentals by modeling them as linear functions in which sensitivity to changes in each factor is represented by a factor-specific beta. Stock prices, and therefore stock returns, are generally believed to be determined by fundamental macroeconomic variables such as interest rates, inflation, exchange rates, and Gross Domestic Product (Kirui, Wawire and Ono, 2014).

From the studies above, the authors are interested in analyzing the study themed: The Effect of Macroeconomic Variables on the Stock Returns of Companies Listed on the Stock Exchange: Empirical Evidence from Indonesia. Furthermore, this study can be formulated as the following hypotheses:

1. The inflation rate is assumed to have an effect on stock returns.
2. The interest rate is assumed to have an effect on stock returns.
3. The money supply is assumed to have an effect on stock returns.
4. The exchange rate is assumed to have an effect on stock returns.

Data and Sample

The study used macroeconomic data reported online by related institutions such as Bank Indonesia, the Indonesia Stock Exchange, the Central Bureau of Statistics, and others. The macroeconomic data used consisted of the inflation rate, the average bank interest rate prevailing at the time of observation, the amount of money in circulation, the development of the US dollar exchange rate against the rupiah (IDR), and the development of the overall stock returns of companies listed on the Indonesia Stock Exchange.
This study uses monthly time-series data from November 2016 to June 2018, or 20 months of observation. The reason for using this timeframe is mainly to explain the current phenomenon, which can be explained by the preceding months, since using long-term data from several years earlier is less realistic than the short monthly period used in this study.

Variable Measurement

The variables used in this study consist of the dependent variable, the overall stock return rate of firms on the Indonesia Stock Exchange, and four independent variables consisting of the inflation rate, the interest rate, the amount of money in circulation, and the US dollar exchange rate against the rupiah, as stated below.

(a) The dependent variable is the stock return rate (YRi) of the companies listed on the Indonesia Stock Exchange as a whole, proxied by the growth rate of the composite stock price index (JCI) between periods.

Model: YRi = b0 + b1·Inflation + b2·InterestRate + b3·MoneySupply + b4·ExchangeRate + e. [The remainder of the model, analysis, and descriptive-statistics passages are unrecoverable in the source.]

The significance level between the variables is smaller than 1%, which means that the hypotheses proposed in this study are appropriate and supported by the statistical t-test.

Discussion

Based on the hypotheses, as evidenced by the results of the statistical calculations, and with attention to the results of earlier research as put forward above, some principal points relevant to the role of macroeconomic variables in influencing the stock returns of companies on the Indonesia Stock Exchange need to be discussed, namely:

(a) The hypothesis that the macroeconomic variables comprising the inflation rate, interest rate, money supply, and foreign exchange rate have a significant effect on the stock returns of companies on the Indonesia Stock Exchange is proven. This is in line with the hypothesis above and consistent with the results of earlier research, such as the findings reported in Chen et al. (2005), Gunsel & Cukur (2007), and Abedallat & Shabib (2012).
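The estimation behind results of this kind is ordinary least squares on the four macroeconomic regressors. A minimal sketch follows; the monthly observations below are invented for demonstration and are not the study's actual 20-month data:

```python
import numpy as np

# Hypothetical monthly observations (illustrative only).
# Columns: inflation rate, interest rate, money-supply growth, IDR/USD change.
X = np.array([
    [3.6, 4.75, 0.8, 0.2],
    [3.3, 4.75, 0.5, -0.1],
    [3.8, 4.50, 0.9, 0.4],
    [3.4, 4.50, 0.7, 0.1],
    [3.2, 4.25, 0.6, -0.3],
    [3.5, 4.25, 1.0, 0.5],
])
y = np.array([1.2, 0.8, 0.4, 1.0, 1.5, 0.2])  # composite-index return (%)

# Prepend an intercept column and solve by least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("intercept:", coef[0])
print("betas (inflation, interest, money supply, FX):", coef[1:])
```

In practice one would also compute standard errors and t-statistics for each beta (for example with a statistics package) to obtain the significance levels the study reports; the sketch only shows the point estimates.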
(b) This research finds that in the short and medium term, the inflation rate has a positive effect on company stock returns. This happens because government support for the development of industry and the business world, as is currently given, increases employment opportunities and reduces poverty. As unemployment decreases, then, as the Phillips curve shows, inflation moves inversely with unemployment: household consumption spending rises and triggers inflation, while conversely, when unemployment rises, inflation falls because the community experiences financial difficulty and demand decreases even though prices tend to be cheaper. The implication is that relatively controlled inflation pushes up stock returns, so monetary policy by the central bank and fiscal policy by the government are required to keep inflation under control. These results support the research results of Priyono (2016, 2018, 2019) and Priyono, Briyan Cadalora Putra, and Cisa Cadalora Putri (2019).

(c) The implication of interest-rate policy is that the monetary authority must ensure this macroeconomic variable does not have a bad impact on the development of the business world, as marked by declining company stock returns. The interest-rate variable has a negative effect on stock returns because increased interest rates depress the amount of investment in the industrial and business sectors, which in turn increases unemployment and pushes up poverty. This is where the monetary authority has a role in wisely setting the reference interest rate that influences bank interest rates overall.
These results are in line with investment theory, which states that the interest rate is negatively correlated with investment: investment is a function of the interest rate with a negative slope, meaning that a high interest rate causes a decline in the amount of investment, while a low interest rate pushes investment growth. If investment increases, the business world grows, unemployment falls, poverty rates become lower, and company stock returns tend to increase. This is in line with the opinion of Priyono, Briyan Cadalora Putra, and Cisa Cadalora Putri (2019).

(d) The amount of money in circulation has a positive impact on stock returns. In the short and medium term, business opportunities grow because banking pushes increased funding of investment in industry and the business world, so company financial performance improves. Increased profitability or financial performance of industry and the business world, together with easier access to bank funding, pushes up the stock returns of companies on the Stock Exchange. The policy implication for banking is to encourage the community's interest in optimizing the use of its income through various credit facilities, so that business transactions increase because the community has the chance to use future income in the present.

(e) The foreign exchange rate has a negative and significant effect on the stock returns of companies on the Stock Exchange, meaning that a weakening exchange rate causes losses for certain companies, such as companies that have overseas debt or debt denominated in foreign currency, that use imported raw materials, that import capital goods, and others. The implication is that investors on the Stock Exchange respond to the foreign exchange rate because, in general, an increase in the foreign currency's value depresses company stock returns overall, characterized by a decline in the composite stock price index of the Indonesia Stock Exchange.
Conclusion and Suggestions

The results of this study conclude:

(a) In the short and medium term, the inflation rate has a positive and significant effect on the stock returns of companies on the Indonesia Stock Exchange, which means that increases in inflation push up company stock prices. It is suggested that monetary policy by the central bank and fiscal policy by the government control inflation, because the inflation rate gives a positive signal to investors in the capital market and thus drives investors' decisions in determining their investment portfolios.

(b) The interest rate has a negative and significant effect on the stock returns of companies on the Indonesia Stock Exchange, which means that increases in the interest rate depress the stock prices of companies on the Stock Exchange. It is suggested that reference-rate policy take its macroeconomic impact into account, because this policy is very sensitive for investor decisions in the capital market; even mere information about a planned increase in the reference rate will trigger movements in stock prices on the Stock Exchange.

(c) The money supply has a positive and significant effect on the stock returns of companies on the Indonesia Stock Exchange, which means that increases in the amount of money in circulation push up the stock prices of companies on the Stock Exchange. It is suggested that the central bank control the amount of money in circulation, because this macroeconomic variable significantly influences the stock returns of companies on the Stock Exchange, and if this is ignored, the multiplier impact will resonate systemically at the national level.
(d) The foreign exchange rate has a negative and significant effect on the stock returns of companies on the Indonesia Stock Exchange, which means that an increase in the foreign exchange rate leads to a decline in stock prices on the Stock Exchange. It is suggested that the government manage the current account through policies such as pushing export growth, limiting imports of consumer goods, and supervising and controlling the foreign debt undertaken by the government, state-owned enterprises, and the private sector that causes a current account deficit.

Implications

The implication for future research is that similar research could be done with an expanded time period, for example 5 to 10 years, and with additional macroeconomic variables not used in this study, so as to complement this research.
Associations of Broader Parental Factors with Children’s Happiness and Weight Status through Child Food Intake, Physical Activity, and Screen Time: A Longitudinal Modeling Analysis of South Korean Families This study investigated how broader parental factors including parental happiness, parental play engagement, and parenting stress are related to Korean children’s happiness and weight status across three years via indirect pathways through the children’s energy-related behaviors of healthy and unhealthy food intake, physical activity, and screen time. Data from 1551 Korean parent pairs and 7-year-old children in the Panel Study on Korean Children were analyzed. A path analysis and gender-based multi-group analysis were conducted. Maternal happiness was negatively related to child screen time. Maternal play engagement showed positive concurrent associations with child healthy food intake and physical activity and negative associations with screen time. Maternal parenting stress was negatively related to child healthy eating. There was one significant finding related to fathers’ role on children’s energy-related behaviors, happiness, and weight status: the positive association between parental happiness and boys’ unhealthy food intake. Child screen time was positively related to child weight status and negatively to child happiness at each age. Broader maternal parenting factors can serve as a protective factor for childhood happiness and weight status in 7-to-9-year-olds through being associated with a reduction in child screen time. 
Introduction Childhood is a prime time to build healthy habits that nurture the foundation of a healthy and happy lifestyle [1]. Yet, numerous children adopt detrimental habits before they transition into adolescence [2]. These habits encompass an insufficient consumption of fruits and vegetables, overindulgence in energy-dense foods laden with sugars and fast food, reduced physical activity (PA), and excessive screen time [3]. Such energy-related behaviors (ERBs) may have broad effects on children's health and happiness. Poor ERBs not only give rise to immediate health concerns, but also to obesity and overweight body mass index (BMI), which can often persist into adulthood [4]. Childhood obesity is a major public health concern, as it has worsened rapidly in the past decades, from 4% in 1975 to over 18% in 2016 worldwide [5]. South Korea (Korea, hereafter) is not an exception, as the childhood obesity rate has increased from 9% in 2007 to 19% in 2021 [6]. Childhood happiness is a critical component of child well-being. The strong association between a close parent-child relationship and child happiness is well established [7]. Also, there is a general belief that PA is linked to happiness in children, while excessive screen time is associated with reduced happiness, often manifesting as higher levels of mental distress [8]. Likewise, a body of studies reports an association between the intake of fruit and vegetables and enhanced well-being in adults [9], but more research is clearly needed, especially in the formative childhood period [10]. Of concern is that Korean children report comparatively low levels of happiness, placing them at the bottom of the 22 Organization for Economic Cooperation and Development countries [11]. Therefore, it is important to gain a deeper understanding of mechanisms that may contribute to childhood happiness, of which ERBs may be one set of candidates.
Parental Correlates of Children's Energy-Related Behaviors In childhood, at least prior to the teenage years, parents are the primary influence on their children's ERBs [12]. Much previous research has focused on specific strategies that parents use to influence their children's ERBs, such as modeling exercise, restricting access to energy-dense snacks, and providing specific feedback on food choices [13]. However, recent studies have focused on broader approaches that parents use in their parenting more generally that do not specifically target the children's ERBs. Examples of such broader parenting factors are fostering cohesion in the family, applying the authoritative/democratic parenting style, and general monitoring of children [14]. Parents contribute to creating a nurturing emotional atmosphere in the home based on their own perceptions and experiences, particularly during children's formative years [15], which is in part reflected by their approach to their children's ERBs. For instance, a positive and supportive household environment can encourage health-promoting behaviors among children [16,17], while a stressful or negative environment can lead to unhealthy coping strategies such as emotional eating, overeating, or excessive screen time [18]. Understanding the role of broader parental factors can provide important insight into the potential barriers or facilitators to promoting healthy ERBs in children and can help inform interventions aimed at improving the family environment and, in turn, children's ERBs. Whereas a range of broader parental factors can be hypothesized to play a role in children's ERBs, the current study focused on three: parental happiness, parental play engagement, and parenting stress.
Parental Happiness Parental subjective happiness warrants attention for its influence on children's development and health, as happier parents possess better psychological resources, which enable the use of their emotional and social capabilities to provide a positive and warm home environment [19]. Because happiness is related to better physical and psychosocial well-being [20], happier parents tend to sustain the child's development and emotional security [21]. Despite its link to numerous positive outcomes in children, research into links between parental subjective happiness and children's ERBs is lacking thus far. We hypothesize that parental happiness is associated with more healthy and less unhealthy ERBs in their children. Parental Play Engagement Another important positive broader parenting factor is parents' engagement in play with their children. Play involves a range of instinctive activities for recreational pleasure and enjoyment [22] and is such an essential foundation of children's life that the United Nations High Commissioner for Human Rights declared it as every child's right [23]. Play in a variety of scenarios, such as pretend play, role play, and building with blocks, serves as a catalyst for the development of a wide spectrum of children's competencies, including executive functioning, cognitive aptitude, and effective communication abilities [24,25]. The active participation of capable parents in play with their children, encompassing both the quantity of time spent together and the quality of the interactions, enhances the transition to more intricate and advanced development of abilities such as planning, organization, and the use of verbal instructions [24]. Additionally, parents' active involvement in play with their children fosters secure and nurturing relationships [26], which provides the underpinning for further positive development into and through adolescence. Despite the central role of parent-child play in children's development, to
our knowledge, no study has examined its association with children's ERBs. We hypothesize that it can provide a practical avenue for enhancing the parent-child relationship, which we argue is foundational for influencing child ERBs [14]. Parenting Stress While parents' subjective happiness and play engagement are positive aspects of the home environment, parenting stress refers to the negative emotional experiences of strain, worry, anxiety, or depression that parents may experience specific to fulfilling their challenging parental responsibilities [27]. Parenting stress exerts a profound influence on parental interactions and parenting methods, and consequently, on the overall well-being and development of children [28]. Moreover, parenting stress can significantly affect not only the parents' own health behaviors but also those of their children [29]. However, the research findings regarding the connection between parenting stress and children's ERBs have yielded mixed results. For example, children whose parents experience elevated parenting stress were inclined to consume fewer vegetables, spend more time on screens, and engage in less PA [29]. Conversely, other research showed that there was a positive association between parenting stress and unhealthy parental practices but not with children's unhealthy dietary habits [30]. Consequently, there is a need for further research into the relationship between parenting stress and children's ERBs. We hypothesize that parenting stress will be associated with fewer positive and more negative ERBs among children.
Differences between Maternal and Paternal Factors on Daughters and Sons Much of the research illuminating the role of broader parenting factors in children's development has focused on mothers [31,32]. This may be due to gendered parenting practices, where mothers spend substantially more time on average with children as the main caregiver and take on more household responsibilities, especially those related to ERBs [33]. In contrast, fathers tend to engage in relatively more play and leisure activities with their children [34,35]. Even so, mothers still participate more in children's play and leisure activities than fathers in several countries, including Korea and the United States [36,37]. In turn, these discrepancies in their parenting roles and related experiences may result in differential parenting experiences among fathers and mothers [38]. While it is widely acknowledged that mothers experience higher stress levels and lower overall happiness compared to fathers [38], our comprehension of the diverse pathways connecting the distinct influences of fathers and mothers on children's ERBs, happiness, and weight status remains rudimentary. Especially in a culture where mothers are seen as the primary child caregiver, such as Korea, we expect that the mothers' influence on children's ERBs will be stronger than that of fathers.
Moreover, some research confirms that parents' parenting style varies depending on the child's gender [39]. Parents use different socialization approaches and show different interaction patterns with boys compared to girls [40]. There are also gender differences in ERBs in that boys consume more calories [41] and engage in more PA and screen time than girls [42]. Also, in Korea [43], as in many Western countries [44], male adolescents have reported higher happiness than females, yet obesity rates among Korean boys are higher than among girls [45]. These findings suggest that both the gender of the parent and of the child are important in understanding parental influences on child development [40], and more research is needed to establish the complex links in the parent-child dyad in different gender combinations. Thus, it will be important to test for similarities and differences between fathers and mothers in the role of broader parental factors for daughters' and sons' ERBs, happiness, and weight status to better inform intervention efforts to improve childhood happiness and to prevent childhood obesity.
Research Hypotheses The overall aim of this research is to elucidate the role of ERBs in Korean children's health and well-being, focusing on the indicators of weight status and happiness, as well as how broader parenting factors are associated with children's ERBs. The focus is on the elementary school years from ages 7 to 9, a period mostly prior to when children are increasingly exposed to influences outside those of their parents. We propose a model depicted in Figure 1, which identifies relationships of parental happiness, parental play engagement, and parenting stress with children's ERBs, and in turn, between their ERBs and happiness and weight status across two years. Specifically, the following hypotheses based on this model will be tested using a structural equation modeling (SEM) path analysis.
H1. Parental happiness and play engagement are positively related to child healthy ERBs (healthy eating and PA) and negatively to child unhealthy ERBs (unhealthy eating and screen time).
H2. Parenting stress is negatively related to child healthy ERBs and positively to unhealthy ERBs.
H3. Child healthy ERBs are associated positively with child happiness and negatively with child weight status cross-sectionally as well as longitudinally one and two years later.
H4. Child unhealthy ERBs are associated negatively with child happiness and positively with child weight status cross-sectionally as well as longitudinally one and two years later.
H5. The relationships hypothesized in H1 and H2 will be stronger for mothers compared to fathers.
Moreover, applying SEM, we also conduct a multi-group analysis to explore differences between boys and girls, as specified in Figure 1.
Materials and Methods Participants are ethnically highly homogenous, reflecting Korean society. Only mothers who could communicate in Korean were invited at the time of child delivery. Also, mothers and newborns with serious health issues were excluded. The number of responding families was 1598 in Wave 8 in 2016 (child age 7), 1525 in Wave 9 in 2017 (child age 8), and 1484 in Wave 10 in 2018 (child age 9). The retention rate over the seven years from Wave 1 to Wave 8 was 75.3% [47]. This places this study well within the range of retention rates reported in other national longitudinal cohort studies. For example, the National Longitudinal Survey of Children and Youth in Canada retained 60% [48], whereas Growing Up in New Zealand retained 85% [49], in both cases over eight years. Moreover, the current sample at Wave 8 is representative of the enrolled sample on major demographic characteristics (see Section 3.1, Table 1). For example, fathers' and mothers' education levels retained identical medians and highly similar distributions over the ensuing period, other than the mothers with a 4-year college education increasing by 3% seven years later (details available from authors). Written informed consent was obtained from each adult participant at the time of recruitment. The main caregiver provided consent for their child's participation in this study. Methodological details of the PSKC have been reported elsewhere [50].
Measures Measures of parental happiness, parental play engagement, parenting stress, and child ERBs were administered at child age 7, and those of child happiness and child weight status (in the form of a BMI percentile) at ages 7, 8, and 9. Parental Happiness Parental happiness was assessed with the Subjective Happiness Scale (SHS) [51], which uses 7-point response scales tailored to the four questions: (1) "In general, I consider myself" rated from "not a very happy person" to "a very happy person", (2) "Compared with most of my peers, I consider myself" rated from "less happy" to "happier", (3) "Some people are generally very happy. They enjoy life regardless of what is going on, getting the most out of everything. To what extent does this characterization describe you?" rated from "not at all" to "a great deal", and (4) "Some people are generally not very happy. Although they are not depressed, they never seem as happy as they might be. To what extent does this characterization describe you?" rated from "not at all" to "a great deal". Mothers and fathers completed these items separately, yielding separate scores for each. The internal consistency reliability across items was α = .90 for mothers and .88 for fathers. Higher scores indicate a higher level of parental happiness. To test whether the items can support the measurement of parental happiness as a latent construct, individual items were subjected to a confirmatory factor analysis (CFA).
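The internal consistency coefficients reported for these scales can be reproduced with the standard Cronbach's alpha formula. The sketch below is illustrative only: the response matrix is invented for demonstration, not PSKC data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)          # per-item sample variance
    total_variance = items.sum(axis=1).var(ddof=1)      # variance of the sum score
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Hypothetical 7-point SHS responses from five parents (four items each);
# these numbers are illustrative, not PSKC data.
responses = np.array([
    [6, 6, 7, 6],
    [4, 5, 4, 4],
    [7, 6, 6, 7],
    [3, 3, 4, 3],
    [5, 5, 5, 6],
])
print(round(cronbach_alpha(responses), 2))  # → 0.96
```

Values around .90, as reported for the SHS here, indicate that the items vary largely together rather than independently.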
Parental Play Engagement Parents indicated their level of play engagement by reporting the frequency of various parent-child play activities. The questionnaire was based on the Home Environment, Activities, and Cognitive Stimulation Questionnaire used in the Early Childhood Longitudinal Study Kindergarten Cohort [46]. The questionnaire consisted of 10 play activities, each rated on a 4-point scale from "never" to "every day", including, for example, "I tell stories to my child", "I do arts and crafts with my child", "I talk about nature or do STEM projects with my child", and "I do blocks and puzzles with my child". Mothers and fathers completed these items separately. The internal consistency reliability across items was α = .86 for maternal play engagement and .88 for paternal play engagement. Higher scores indicate a higher level of play engagement by a parent. These items were also subjected to a CFA. Parenting Stress Parenting stress was assessed with 11 questions from an instrument developed previously [52] and then revised for the PSKC during the pilot study in 2007. Questions were rated on a 5-point scale from "strongly disagree" to "strongly agree" and included, for example, "I am not sure if I could become a good parent", "I am not sure if I could raise my child well", and "I feel sometimes that my child is behind his or her peers because I am not doing enough as a parent". Mothers and fathers completed these items separately. The internal consistency reliability across items was α = .90 for maternal stress and .89 for paternal stress. Higher scores indicate a higher level of parenting stress. These items were also subjected to a CFA.
Child Healthy Eating The main caregiver (mostly mothers) was asked four questions addressing whether the child (a) eats one serving of fruit or fruit juice per day; (b) eats vegetables, excluding Kimchi, at every meal; (c) eats lean meat/fish/beans/tofu, etc., at every meal; and (d) drinks two cups of milk/yogurt per day. The questions were developed by the PSKC and the Childhood Allergy Department at Asan Medical Center in Seoul, Korea. Responses were recorded on a 3-point scale comprising "very unlikely", "moderately likely", and "very likely". A total composite score was calculated so that a higher score indicates more intake of healthy food items [53]. Child Unhealthy Eating Parents were asked three questions addressing whether their child (a) eats fried food more than twice a week; (b) often adds salt or soy sauce to meals (to make the meal salty); and (c) has ice cream, cake, or soda more than twice a week. The questions were developed by the PSKC and the Childhood Allergy Department at Asan Medical Center in Seoul, Korea. Responses were recorded on a 3-point scale comprising "very unlikely", "moderately likely", and "very likely". A total composite score was calculated so that a higher score indicates more intake of unhealthy food items [49]. Child Physical Activities Parents were asked how many minutes of the day a child usually spends exercising, including Taekwondo, playing with balls, swimming, and free play outside in a park, play area, or yard of the house. This was reported separately for weekdays and weekends. The weekly total physical activity in minutes was calculated. Child Screen Time Parents were asked how many minutes of the day a child usually spends watching TV, a computer, and other screen-based devices and engaging in gaming activities with such devices. This was reported separately for weekdays and weekends. The weekly total screen time in hours was calculated.
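Turning the separate weekday and weekend reports into weekly totals can be sketched as below. The paper does not state the exact weighting, so this sketch assumes the conventional 5 weekdays plus 2 weekend days; the example values are hypothetical.

```python
def weekly_minutes(weekday_min: float, weekend_min: float) -> float:
    """Weekly total from average daily minutes, assuming 5 weekdays + 2 weekend days."""
    return 5 * weekday_min + 2 * weekend_min

# Hypothetical child: 30 min of PA on a weekday, 60 min on a weekend day.
pa_weekly_minutes = weekly_minutes(30, 60)           # 270 minutes per week
# Screen time is reported the same way but expressed in hours in the study.
screen_weekly_hours = weekly_minutes(120, 180) / 60  # 16.0 hours per week
print(pa_weekly_minutes, screen_weekly_hours)
```

Note that PA is kept in minutes while screen time is converted to hours, mirroring the units used in the measures above.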
Child Happiness Child happiness was assessed using six items of a child self-report from an instrument developed previously [54] and then translated and revised by the PSKC for better understanding [55]. A survey researcher conducted in-person visits to the participants' households. The children were orally asked questions about how they feel about their life and expressed their feelings using a face scale, which featured a range of four face images representing levels of agreement from "not very happy" to "very happy". Questions included, "How do you feel when you think about your family?" and "What do you think of your current school?" The children indicated their responses by pointing to the picture that closely matched their feelings. The use of this type of response scale is common when assessing children, who may have difficulty with numerical or word anchors. Many common measures of child well-being and health employ such a response scale [56,57]. Moreover, systematic reviews have confirmed that psychometric data satisfactorily support the use of face response scales with children [56,58]. An overall score was calculated as an observed variable, with higher scores indicating greater happiness. The internal consistency reliability across items was α = .70 at age 7, .72 at age 8, and .74 at age 9. A previous study based on PSKC data using this measure confirmed its satisfactory internal consistency reliability and provided initial support for its construct validity [53,55]. Child Weight Status The child's weight status was represented by the BMI percentile score. Height and weight at ages 7, 8, and 9 were reported by the main caregiver and used to calculate the BMI percentile at each age, using the Korean Centers for Disease Control and Prevention gender- and age-specific charts [59]. A higher score indicates higher weight status.
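Growth references such as the charts used here typically convert a raw BMI into a percentile via the LMS method. The sketch below shows that conversion; the L/M/S parameters are invented for illustration and are not the Korean reference values, which vary by gender and age.

```python
from math import erf, sqrt

def bmi(weight_kg: float, height_cm: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    height_m = height_cm / 100
    return weight_kg / (height_m * height_m)

def lms_percentile(value: float, L: float, M: float, S: float) -> float:
    """Percentile of a measurement under the LMS growth-reference method (L != 0)."""
    z = ((value / M) ** L - 1) / (L * S)   # LMS z-score
    return 50 * (1 + erf(z / sqrt(2)))     # standard normal CDF scaled to 0-100

# Illustrative only: these L/M/S values are made up, not the Korean reference.
child_bmi = bmi(25.0, 122.0)  # ≈ 16.8
print(round(lms_percentile(child_bmi, -1.5, 15.8, 0.09), 1))
```

In practice the appropriate L/M/S triple is looked up from the reference tables for the child's gender and age before applying the formula.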
Statistical Analysis IBM SPSS Statistics 28 was used for descriptive statistics and Mplus for an SEM analysis to test overall model fit and the hypothesized associations. After the normality of all variables was checked, the square root transformation was applied to child PA to enhance normality. Missing data were handled under maximum likelihood estimation [60]. To assess the construct validity of the measurement models for latent constructs, a CFA was conducted separately for the items of maternal happiness, paternal happiness, maternal play engagement, paternal play engagement, maternal parenting stress, and paternal parenting stress. After ensuring the adequate fit of the measurement models, the SEM path analysis was conducted, reflecting the hypothesized model (see Figure 1), using observed scores to represent the latent variables. The highest education level between the mother and father and the number of people living in the household were added as control variables in the analyses of all paths. Child happiness and the child BMI percentile assessed at ages 7, 8, and 9 were set to covary with the previous assessment of the same variable. In testing structural models, three goodness-of-fit indices were utilized to determine how well the model reproduced the characteristics of the observed data: the root mean square error of approximation (RMSEA), which should be less than 0.08 [61], and the comparative fit index (CFI) and Tucker-Lewis index (TLI), both of which should exceed 0.90 [62]. This was followed by a multiple-group SEM analysis to test differences in model parameters between boys and girls.
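The fit criteria and the square root transformation described above can be expressed as a minimal sketch. The helper function and variable names are ours, not part of the study's Mplus syntax.

```python
import math

def fit_acceptable(cfi: float, tli: float, rmsea: float) -> bool:
    """Apply the cut-offs used in the study: CFI and TLI > .90, RMSEA < .08."""
    return cfi > 0.90 and tli > 0.90 and rmsea < 0.08

# Indices of the kind reported in the Results section.
print(fit_acceptable(0.98, 0.97, 0.03))  # True: all three cut-offs are met
print(fit_acceptable(0.85, 0.88, 0.10))  # False: fails all three cut-offs

# The square root transformation applied to child PA before modeling,
# shown here on hypothetical weekly-minute values.
pa_raw = [270, 120, 0]
pa_transformed = [math.sqrt(x) for x in pa_raw]
```

The square root transformation compresses the right tail of the skewed PA distribution, bringing it closer to the normality assumed by maximum likelihood estimation.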
Demographics Demographics for the sample are provided in Table 1. Parents on average are well educated, as 73% of fathers and 71% of mothers had at least a 2-year college education. The average age at first assessment at Wave 8 was 40.31 for fathers, 37.91 for mothers, and 7.33 for children. The large majority of the families consisted of two parents and at least one child (88%). Descriptive statistics for the parental variables are shown in Table 2, and the child variables are reported in Table 3. Correlations among all study variables are reported in Supplementary S1. Measurement Model As shown in Table 4, the factor loadings for latent variables from the CFA indicated that observed variables loaded significantly onto their respective latent variables (maternal and paternal happiness, maternal and paternal play engagement with the child, and maternal and paternal parenting stress). As shown in Table 4, all measurement models showed a close fit. Path Analysis for the Total Sample Model fit was satisfactory (CFI = .98, TLI = .97, RMSEA = .03). Significant paths are detailed in Figure 2. Maternal happiness was negatively associated with child screen time, and maternal parenting stress was negatively related to child healthy eating. Maternal play engagement was positively associated with child healthy eating and PA and negatively with screen time. None of the paths from paternal variables to any of the child ERBs were significant. Of the four child ERBs, only child screen time showed a consistently positive association with child weight status and a negative association with child happiness, measured at each of the three yearly waves. Child happiness was positively related from one year to the next, from age 7 to 8 and from age 8 to 9. Child weight status was likewise positively related from one year to the next.
Path Analysis for the Boys and Girls Model fit was satisfactory (CFI = .97, TLI = .97, RMSEA = .03) for the multi-group SEM. Significant paths for boys and girls are shown in Figure 3. Maternal happiness was negatively related to girls' but not boys' screen time. Paternal happiness was positively related to boys' unhealthy eating. Maternal play engagement had a negative association with boys' unhealthy eating and a positive association with boys' and girls' healthy eating and girls' PA. Maternal parenting stress was negatively related to boys' and girls' healthy eating. Child screen time was negatively related to boys' and girls' happiness across three years, while child screen time was positively associated only with girls' weight status concurrently and one year later. Child happiness was positively related from one year to the next within boys as well as girls, from age 7 to 8 and from age 8 to 9. Child weight status was likewise positively related from one year to the next for both boys and girls.
Discussion We investigated the complex family dynamics interacting with Korean children's ERBs and ultimately their happiness and weight status. Mainly, we found that it was the mothers who appear to exert more influence on children's behaviors in the form of eating habits, PA, and screen time. Regardless of child gender, having a mother who engaged in more play was associated with children eating more healthy food, being more physically active, and spending less time in front of screens. Among child ERBs, it was child screen time that had the strongest association with child happiness and weight status. As suggested, children who spent more time with screens showed lower ratings of happiness and higher weight status. There were few parental and child gender interactions. We found that higher maternal happiness was only related to girls' spending less time with screens. Also, more maternal play engagement was only related to girls' higher PA. The sole paternal factor that exhibited significance was the counterintuitive association between fathers' higher happiness and more unhealthy eating in boys.
Maternal Happiness and Children's Screen Time To our knowledge, this is the first study to test and report a negative association between mothers' happiness and children's screen time. Although we are not aware of any studies examining this link directly, a recent study conducted in Finland involving preschool-aged children found that parents reporting higher levels of happiness tended to have children who engaged in a greater number of healthy ERBs generally [63]. As happiness is rooted in better physical, psychological, and social well-being [20,64], happier mothers may have better resources to offer options to replace children's screen time and encourage their children to engage in other activities. Alternatively, happier mothers are intrinsically motivated to practice healthy ERBs themselves, leading them to be role models for their children [65]. Also, happier mothers are more likely to practice warm parenting and interact with their children positively [66], which leads children to better comply with their mothers' suggestions.
Parental Play Engagement and Children's ERBs Of parental happiness, play engagement, and parenting stress, it was parental play engagement that had the strongest association with child ERBs. Thus, our study underscores the crucial role of parents' active participation in child play. This engagement not only provides a unique opportunity for parents to offer constructive support for their child's play but also facilitates the development of a lasting and meaningful relationship between parents and children [67]. The close parent-child bond cultivated through play engagement further enhances parent-child attachment and bonding [67], fostering a positive and nurturing home environment. Additionally, the positive interactions during playtime enhance communication between parents and children, becoming a valuable tool for parents to guide their children toward constructive ERBs. In essence, shared activities, such as play, create a foundation where children are more receptive to parental guidance, promoting cooperation and responsiveness [68]. Another pathway from parental active play engagement to children's ERBs may be that play stimulates children's capacity for self-regulation. Self-regulation is an intricate construct that encompasses the capacity to manage behaviors, emotions, and cognitions in the face of environmental demands [69]. Parents who utilize such positive parenting are more likely to provide clear and consistent standards and boundaries for child behavior [70]. In addition, when parents consistently respond to their children's cues and needs with sensitivity, warmth, and appropriateness, children establish emotional security [70]. This positive cycle fosters the development of strong self-regulation skills [71], which would enable them to engage in positive ERBs effectively.
Maternal Parenting Stress and Child Healthy Food Consumption
The association between mothers' increased parenting stress and children's decreased healthy eating is in line with previous findings. For instance, higher parenting stress was related to less fruit and vegetable intake by children [29]. Maternal parenting stress was found to reduce mothers' motivation to stock healthy foods at home [22]. Another possible mechanism is that stressed mothers are more likely to engage in emotional eating and ingest unhealthy food frequently [72], thereby potentially serving as a negative role model for their children.

Child Screen Time and Child Happiness
When we consider all four ERBs simultaneously in our study, child screen time emerges as the standout ERB with a consistent connection to both happiness and weight status lasting for at least two years. Previous research has consistently reported that high screen time is associated with various negative well-being and health outcomes [73]. For example, in a recent study in Spain with older children, increased screen time was related to poorer psychological well-being and greater psychological distress [74]. Also, adolescents and young adults with longer screen time reported lower psychological well-being [75], less happiness [76], and more stress [77]. Here, we showed that these negative side effects of increased screen time start already at the early elementary school age and are manifested in a different culture.
Several mechanisms could explain the negative link between screen time and childhood happiness. Excessive screen time can lead to social isolation as individuals may spend less time interacting with others in person [78]. Prolonged screen time can reduce face-to-face interactions with family and friends, which are crucial for emotional well-being and support [79]. Loneliness and social isolation are known risk factors for psychological distress [80]. Also, content on screens can vary widely, and exposure to distressing or violent content can increase feelings of fear, anxiety, or distress, further impacting a child's emotional well-being [79,81]. These factors collectively underscore the potential for screen time to adversely affect a child's happiness.

Child Screen Time and Child Weight Status
With the development of media-related technologies, a considerable amount of time is now being spent in front of a screen. Children who are exposed to screens for long periods may face an exacerbated risk of overweight BMI and obesity due to a lack of PA and the tendency to ingest more high-calorie food [73]. Longer screen time is often accompanied by lower overall PA, being replaced with increased sedentary behavior, which would lead to lower energy expenditure and increased fat deposits and BMI [82]. Moreover, eating in front of a screen would delay hunger cues and lead to excessive food intake [83]. Therefore, longer screen time may be one of the most important risk factors for overweight BMI and obesity [14].
Parent's and Child's Gender
We observed a few gender associations between parents and children in that maternal play engagement was negatively linked with boys' unhealthy food consumption, maternal play engagement was positively related to girls' PA, maternal happiness was negatively associated with girls' screen time, and paternal happiness was positively related to boys' unhealthy food intake. All in all, it was the mothers who had a positive influence on a child's health and development. This may be attributed to the gender differences in traditional parental roles. Mothers predominantly assume the proactive role in child rearing, planning, and household management [84,85]. In contrast, fathers, often lacking expertise in housework or childcare, tend to be less engaged [86]. As mothers take on more responsibility for feeding the child, especially in a healthy and nutritious way [86,87], fathers may resort to easy but less nutritious meals and snacks when they are in charge, possibly due to their limited cooking skills or lower level of attentiveness. According to a recent study in Korea that aimed to explore the division of childcare responsibilities among parents, mothers assumed 70.9% of the childcare duties during weekdays regardless of employment status [88]. During weekends, fathers' involvement increases, but mothers still bear more responsibility at 57.8% [88]. Nevertheless, a shift is noticeable as many fathers are now demonstrating willingness to actively engage in family caregiving. They are increasingly taking parental leave from work and becoming more involved in child rearing [89]. These findings shed light on the existing gender disparities in parenting roles and emphasize the need for further research and societal efforts to promote gender equality in childcare responsibilities.
Limitations
Despite several novel findings of this study, the results should be considered in light of limitations. First, this is an observational study from which causation cannot be determined. Second, the sample is homogeneous, consisting of Korean married parents and their children who are developing in the broad normal range. Moreover, some of the assessments reflect Korean culture, such as the type of play addressed when assessing parental play engagement and the marker foods targeted when measuring a child's intake. These findings need to be replicated with other samples while still reflecting cultural competencies. Third, child ERBs were reported mostly by mothers, which could be biased toward presenting a more positive picture than reality. Employing more objective assessment tools such as tracking devices or ecological momentary assessment via smartphone applications could enhance accuracy and reliability. Fourth, our understanding of screen behaviors is limited to the amount of time spent in front of a screen, as reported by a parent. Therefore, we are unable to examine the content or context of the screen time that may moderate child ERBs. For example, future research should investigate whether certain types of media or content exposure are linked to more harmful outcomes. Finally, acknowledging the potential impact of parental factors on a child's self-report adds a layer of complexity to the interpretation of happiness measurements in children. Consequently, further research is necessary to evaluate the validity of children's self-reported happiness. Notwithstanding these limitations, this is the first study that illuminates the potential mechanisms of how various parental factors may influence Korean children's ERBs concurrently and ultimately their happiness and weight status over time.
Conclusions
Healthy and well-adjusted child development is an overriding goal of most parents as well as society [90]. In our pursuit of identifying the optimal approach to enhance children's happiness and promote healthy weight outcomes, we found that broader parenting factors can serve as protective factors for both childhood happiness and weight status among Korean 7-to-9-year-olds. This association is linked to a decrease in children's screen time. Particularly, we could argue that when parents actively play with their children, expressing attention and care, children may unwittingly come to utilize self-regulation to practice healthier behaviors, but further studies are required to understand the possible mechanisms. Furthermore, the unexpected finding regarding the paternal role in boys' unhealthy food intake underscores the need for societal support to encourage greater paternal involvement in their children's lives. This could involve initiatives promoting a work-life balance and public health campaigns offering practical suggestions on how fathers can engage more actively with their children. Finally, in today's digital age, where children have access to a wide range of media for both entertainment and education, there is an urgent need for a personalized approach to screen content and duration. In this sense, it becomes crucial to advocate for alternative leisure activities within families and equip parents with resources to facilitate and encourage these options.
Data Source and Participants
Data are from Wave 8 through 10 (child ages 7-9) of the publicly available data set from the Panel Study on Korean Children (PSKC) conducted by the Korean Institute of Child Care and Education [46]. The PSKC is a prospective longitudinal survey of a representative national cohort sample of children born between April and July 2008 and their parents. It was designed to collect comprehensive data on the characteristics of children, parents, families, and local communities as well as the effectiveness of childcare policies in Korea. The first wave of PSKC, enrolling 2150 families, was conducted in 2008, and follow-up surveys have been performed annually and are still ongoing.

Figure 2. Path analysis results for the total sample. Latent variables are represented with ovals and observed variables by rectangles. All parental variables and child healthy eating, unhealthy eating, physical activities, and screen time were assessed at child age 7. Only significant standardized path coefficients are reported. Positive relations are represented by bold lines; negative relations are represented by dotted lines. All paths were controlled for mothers' and fathers' education level, and household composition. BMI %tile = body mass index percentile. * p < .05.

Figure 3. Path analysis results for the multi-group analysis. Latent variables are represented with ovals and observed variables by rectangles. All parental variables and child healthy eating, unhealthy eating, physical activities, and screen time were assessed at child age 7. Only significant standardized path coefficients are reported. Positive relations are represented by bold lines; negative relations are represented by dotted lines. All paths were controlled for mothers' and fathers' education level, and household composition. BMI %tile = body mass index percentile; B = Boys; G = Girls. [1] B: .13*; [2] B: .12*; G: .11*; [3] B: −.10*; * p < .05.

Table 2. Descriptive Statistics for Parental Variables.
Table 3. Descriptive Statistics for Child Variables.
Table 4. Factor Loadings and Model Fit for Indicators of Latent Variable.
Note. RMSEA = Root Mean Square Error of Approximation, CFI = Comparative Fit Index, TLI = Tucker Lewis Index.
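The single-predictor paths reported in the figures above can be illustrated with a toy computation. The sketch below estimates a standardized path coefficient on synthetic data (all variable names and effect sizes are hypothetical, and the simple-regression shortcut stands in for the full latent-variable path model the study actually fits):

```python
import random
import statistics

random.seed(7)
n = 2000

# Synthetic stand-ins for the study's constructs; signs mirror Figure 2,
# but the magnitudes here are invented for illustration only.
play = [random.gauss(0, 1) for _ in range(n)]
screen = [-0.3 * p + random.gauss(0, 1) for p in play]    # play -> less screen time
happy = [-0.4 * s + random.gauss(0, 1) for s in screen]   # screen -> less happiness

def std_beta(x, y):
    """Standardized coefficient of y on x (single-predictor path = Pearson r)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sx, sy = statistics.pstdev(x), statistics.pstdev(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n * sx * sy)

b_play_screen = std_beta(play, screen)    # negative, as in the path model
b_screen_happy = std_beta(screen, happy)  # negative, as in the path model
```

With a single predictor, the standardized coefficient equals the Pearson correlation; the actual model additionally adjusts for the covariates listed in the figure notes (parental education, household composition).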
Study of the Intelligent Control and Modes of the Arctic-Adopted Wind–Diesel Hybrid System
For energy supply in the Arctic regions, hybrid systems should be designed and equipped to ensure a high level of renewable energy penetration. Energy systems located in remote Arctic areas may experience many peculiar challenges, for example, due to the limited transport options throughout the year and the lack of qualified on-site maintenance specialists. Reliable operation of such systems in harsh climatic conditions requires not only a standard control system but also an advanced system based on predictions concerning weather, wind, and ice accretion on the blades. To satisfy these requirements, the current work presents an advanced intelligent automatic control system. In the developed control system, the transformation, control, and distribution of energy are based on dynamic power redistribution, dynamic control of dump loads, and a bi-directional current transducer. The article shows the architecture of the advanced control system, presents the results of field studies under the standard control approach, and models the performance of the system under different operating modes. Additionally, the effect of using turbine control to reduce the effects of icing is examined. It is shown that the advanced control approach can reduce fuel consumption in field tests by 22%. Moreover, the proposed turbine control scheme has the potential to reduce icing effects by 2% to 5%.

Introduction
Most of the Russian and Finnish territories are located in a cold climate. Almost the entire northern part of these countries, as well as Central Siberia and Yakutia in Russia, fall within the cold polar zone of the Arctic, a zone of extremely low temperatures. In these areas, the duration of winter significantly exceeds that of summer, with temperatures dipping close to −50 °C. Conditions here are unique, yet approximately 2.5 million people live in these areas.
This is more than the total number of people residing in the Arctic areas of the seven other Arctic nations, all of which have less severe climatic conditions [1]. A significant part of the Arctic territory belongs to the decentralized energy supply zone. This zone is characterized by weak infrastructure associated with its remoteness from regional centers, and electricity is mainly produced by diesel power plants operating on expensive imported fuel. In the Russian areas of this zone, there are approximately 900 diesel power plants in operation, which produce an energy output of about 3.0 billion kWh annually [1]. The main challenges of supplying power to isolated consumers are the high logistical costs associated with the delivery of fuel and equipment for diesel power plants, the limited transport infrastructure, and, consequently, the high cost of fuel. Additionally, the operating costs of diesel power plants and specific fuel consumption are also high. Hybrid systems intended for these areas must therefore account for:
• Limitations in equipment and fuel delivery due to the short periods in which transportation is possible;
• The need for quick installation and construction without the use of heavy lifting and transport equipment in the absence of roads;
• The possibility for maintenance without the involvement of qualified specialists.
Additionally, the hybrid system must have a high degree of automation, including adaptive algorithms and intelligent control, and a remote monitoring and diagnostic system to optimize expensive diesel fuel usage. Given the above, this study develops an existing field operating control system to improve its autonomous operation in Arctic conditions and maximize diesel fuel savings. It is shown that carbon-neutral technologies can be highly effective in Arctic zones provided that advanced control and reliable, safe operation are ensured. The study aims to identify how the system control approach affects system performance. Two control schemes are studied and compared with the use of a diesel engine only: 1.
load following mode, and 2. cycle charge with short-term forecasting including icing effects. Additionally, a new wind turbine control method is proposed to decrease the effects of icing. The key novelties of the study are:
• The presentation of advanced control that includes climatic forecasting (wind speed, icing, etc.);
• The presentation of a new control approach to limit the effects of wind turbine icing.
The article is structured as follows. First, the hybrid system's control methods are explained. Next, the icing modeling and the novel pitch control approach are presented. In the Results section, before the discussion and conclusions, the effects of the different control methods are examined. This analysis is followed by the demonstration of the potential of a combined pitch and tip-to-speed ratio control approach to reduce the effects of icing.

Materials and Methods
This section presents the general layout of the proposed intelligent automatic control system (IACS) (Figure 1). Subsequently, the two parts of IACS, namely, standard control and advanced control, are explained in detail. This section also presents a novel turbine control scheme for reducing the effects of icing. To solve the challenges related to the hybrid energy system's fuel economy, a control methodology has been proposed by Elistratov et al. [21].
Their study argues that an intelligent automatic control system must [21]:
1. In real time, maximize the energy output of the wind power plant and fuel economy while covering the required load;
2. Provide remote monitoring of the hybrid system's parameters and operating modes;
3. Provide intelligent dispatching of the equipment, ensuring the maximum degree of autonomy;
4. Monitor the condition of the equipment, analyze the statistics of wind-diesel operating modes, and provide forecasting of the wind regime;
5. Ensure scheduling of equipment operation, maintenance, risk assessment, and emergency prevention interventions;
6. Duplicate the main controller of the system and the control and measuring systems; in an emergency, the possibility of manual control should be provided;
7. Be adaptable and supply energy around the clock, including in the event of a failure of the generating equipment.
Structurally, IACS consists of the following units, which are presented in Figure 1 and explained in the following two sub-sections:
• An equipment diagnostics unit (supervisory control and data acquisition of each system element);
• A power balance control unit that distributes energy between the system's generating equipment;
• A forecasting unit;
• An icing prediction unit.
The role of the first two units is to allow the system to achieve high renewable energy penetration. In contrast, the last two units compose an additional advanced control system that allows operation in harsh climatic conditions.

Power Balance Control and Equipment Diagnostics Units (Standard Control)
IACS is the software part of the "conversion, control and energy distribution" module of hybrid energy systems.
This module provides the possibility of maximizing energy production from renewable energy sources due to the dynamic redistribution of power between the elements of the hybrid system and, as a result, minimizing fuel consumption with the option to disconnect the diesel generator entirely when renewable energy sources have sufficient capacity. Figure 2 presents the hardware part of this module [22]. The hardware consists of two power devices for dynamic power balance control (i.e., the bi-directional current transducer and controlled dump load) and the main controller, both of which perform high-level control. The energy sources of the autonomous hybrid system are divided into two categories: leading and following sources. Leading sources can be either the diesel component (as the main source, defining a supply voltage) or the bi-directional current transducer with connected batteries (in autonomous inverter mode) while the following sources adapt to the main source's voltage and generate power to the grid (e.g., the wind component). If the capacity of the diesel and wind components, averaged over a certain period, exceeds the total power consumption, then to achieve the maximum use of renewable energy and thus to maximize diesel fuel economization, it is possible to turn off all diesel generators. In this case, the leading source becomes the bi-directional current transducer, which goes into standalone inverter mode and generates a network voltage.
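The leading/following logic described above can be sketched as a toy dispatch step. This is only an illustration of the switching idea: the averaging window, start/stop hysteresis, and battery charge management of the real IACS are deliberately omitted, and all names and thresholds are hypothetical:

```python
def dispatch(wind_kw, load_kw, soc, soc_max=0.9):
    """One toy power-balance step for the hybrid system.

    Returns (diesel_kw, dump_kw, leading_source). When renewables cover
    the load, the diesel generator is switched off and the bi-directional
    transducer becomes the leading source; surplus charges the battery,
    or goes to the controlled dump load once the battery is full.
    """
    surplus = wind_kw - load_kw
    if surplus >= 0:
        diesel = 0.0
        # Battery full -> route the excess to the controlled dump load.
        dump = surplus if soc >= soc_max else 0.0
        leading = "bi-directional transducer"
    else:
        # Deficit: diesel leads, defining the supply voltage, and covers it.
        diesel = -surplus
        dump = 0.0
        leading = "diesel"
    return diesel, dump, leading
```

For example, `dispatch(50, 30, 0.95)` switches the diesel off and dumps the 20 kW surplus, while `dispatch(10, 30, 0.5)` keeps the diesel as the leading source covering a 20 kW deficit.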
Forecasting and Icing Prediction Units (Advanced Control)
The intermittent and fluctuating nature of wind energy production increases the importance of short-term weather forecasting in energy systems. With renewables being introduced into isolated power grids, the inherent uncertainty associated with weather forecasts places significant strain on existing off-grid power systems. These challenges lead to power quality and stability issues and affect both power grid management and balancing. Moreover, efficient system control requires accurate estimations of both energy supply and demand, which further highlights the importance of weather forecasting. In general, energy demand is more stable than renewable energy production, which is directly influenced by local weather systems. It is, however, important to acknowledge that unexpected peaks in demand can occur, for example, due to extreme weather. Poor weather predictions can lead to various problems in off-grid systems with detrimental economic and environmental effects. These challenges include the possibility of power shortages, the need for additional spinning or non-spinning reserves, and the increased use of diesel fuel. Another possible scenario is that the system can produce a large oversupply of energy, whereby diesel fuel will be burned needlessly. These considerations fully justify the need for high-quality weather predictions covering 10- to 60-min time spans to ensure efficient grid supply and demand balancing [23]. In addition to traditional short-term forecasting methods such as the Auto-Regressive Integrated Moving Average (ARIMA), many modern approaches use a form of deep learning known as recurrent neural networks (RNNs). A popular type of RNN, which is applied here, is the Long Short-Term Memory (LSTM) network.
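As a point of reference for the classical side of this comparison, the "integrated" and "autoregressive" ingredients of ARIMA can be sketched in a few lines. The function below is a simplified ARIMA(1,1,0)-style forecaster on the first-differenced series, not the LSTM model the study adopts:

```python
def ar1_forecast(series, steps=1):
    """Multi-step forecast from an AR(1) fit on the differenced series.

    Differencing removes the trend ('integrated' part); the AR(1)
    coefficient phi is the least-squares fit of each increment on the
    previous one ('autoregressive' part). A minimal stand-in for a
    full ARIMA model with seasonal and moving-average terms.
    """
    x = [float(v) for v in series]
    d = [b - a for a, b in zip(x, x[1:])]       # first differences
    num = sum(p * q for p, q in zip(d, d[1:]))  # sum d[t-1]*d[t]
    den = sum(p * p for p in d[:-1])            # sum d[t-1]^2
    phi = num / den
    level, step = x[-1], d[-1]
    out = []
    for _ in range(steps):
        step *= phi          # propagate the AR(1) increment
        level += step        # undo the differencing cumulatively
        out.append(level)
    return out
```

On a perfectly linear ramp the fitted phi is 1, so the forecast simply continues the trend; on noisy wind-speed data phi shrinks toward zero and the forecast flattens accordingly.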
The models predicting wind characteristics and power output considered in this article are:
• Integrated autoregressive models (ARIMA);
• Recurrent neural networks with Long Short-Term Memory (LSTM).
The ARIMA models are the most commonly used class of models for stationary signal forecasting (or a signal that can be made stationary). These models support random walk, seasonal trend, non-seasonal exponential smoothing, and autoregressive models. Lags of the stationarized series in the forecasting equation are called "autoregressive" terms, while "moving average" terms describe the lags of the forecast errors. A time series, which needs to be differenced to be made stationary, is said to be an "integrated" version of a stationary series. Random-walk and random-trend models, autoregressive models, and exponential smoothing models are all considered special cases of ARIMA models. The learning and encoding of signal temporal features are enabled by the RNN. This is an ideal approach to forecast signals which are reasonably predictable based on past events. LSTM networks are recurrent networks that can overcome some of the historic challenges related to the training of recurrent networks, such as the vanishing gradients problem. This study will not go into the details of evaluating and comparing forecasting models and will adopt the LSTM model due to its universally accepted ability to predict wind speed and load and perform predictive diagnostics of equipment condition. Wind power plant output forecasts are based both on weather conditions and the power curves of the turbines. Moreover, at least one numerical weather forecast model should be integrated into the model being developed. These weather models will help to predict global weather patterns and their effects on local conditions. The numerical weather model (NWM) used to consider information other than data from local station observations is the NEMS4 model [31].
This model is provided free of charge by MeteoBlue (meteoblue AG, Basel, Switzerland) for a given date range and station. The data are provided in raw format. The weather data prediction unit is connected to the icing prediction unit. When operating a wind turbine in a cold climate, additional power losses occur due to several types of icing: heavy frosting of the blades (in temperatures below −25 °C), sedimentary (cloudy) icing, and atmospheric icing (Figure 3).
Modern wind turbines include proven technical solutions to enable their operation in temperatures as low as −35 °C [32]. However, it is not only temperature that is important but also the duration of icing. For areas with long winter seasons, it is important to strengthen the control system by adding an icing prediction unit. This will make it possible to effectively use the existing systems to protect against ice, which can grow intensively on the surface of the blades. According to the Makkonen theory [33], the functionality of the icing intensity indicator depends on the predicted weather parameters. Based on these calculations, a decision is made to turn on the protection system.
The intensity of icing is determined by the following equation:

dM/dt = α · β · γ · LWC · v · A,

where α is the collision efficiency factor; β is the coefficient of sticking efficiency; γ is the coefficient of efficiency of growth (accretion); LWC is the liquid water content in the air (mass particle concentration), kg/m³; v is the speed of the incoming airflow (particle velocity), m/s; A is the cross-sectional area of the wind turbine blade (relative to the direction of the airflow velocity vector), m². LWC values and the efficiency coefficients depend on the weather parameters (pressure, temperature, humidity, specific water content in the environment, etc.). In this article, the prediction of the onset of atmospheric icing is based on the occurrence of the conditions presented in Table 1.

Table 1. Conditions for atmospheric icing (adopted from [34]).
Parameter | Condition
Wind Speed | >3 m/s
Temperature | −4 °C > T > −20 °C
Relative Humidity | >95%

To protect the blades from ice, special anti-icing and de-icing systems are used, as described in detail in [32,35]. In the icing prediction unit, the input data are acquired from meteorological instruments (weather data), wind measuring systems (wind speed, data correlation for the "heated-unheated anemometer" system), and directly from the wind turbine (power). When the output power from the wind turbine drops and the conditions for atmospheric icing are met (Table 1), the system produces a signal to turn on the anti-icing system. All icing protection systems are divided into two types: active and passive. Active systems require additional power from their own system (these include all anti-icing systems installed inside or outside the blade). Passive systems do not incur additional costs when operating the wind turbine (de-icing systems, for example, painting the blades in black).
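Under these definitions, the icing-rate calculation and the Table 1 trigger can be sketched as follows. This is a literal transcription of the equation and threshold conditions; the real unit additionally cross-checks measured turbine power and heated/unheated anemometer data before switching the anti-icing system on:

```python
def icing_intensity(alpha, beta, gamma, lwc, v, area):
    """Makkonen-type icing rate dM/dt = alpha * beta * gamma * LWC * v * A.

    alpha: collision efficiency, beta: sticking efficiency,
    gamma: accretion efficiency (all dimensionless, 0..1),
    lwc: liquid water content [kg/m^3], v: airflow speed [m/s],
    area: blade cross-sectional area [m^2]. Result in kg/s.
    """
    return alpha * beta * gamma * lwc * v * area

def atmospheric_icing_risk(wind_ms, temp_c, rel_hum_pct):
    """Table 1 conditions: wind > 3 m/s, -20 C < T < -4 C, RH > 95 %."""
    return wind_ms > 3 and -20 < temp_c < -4 and rel_hum_pct > 95
```

For instance, with all efficiency factors taken as 1, LWC = 4·10⁻⁴ kg/m³, v = 10 m/s, and A = 2 m², the upper-bound accretion rate is 8·10⁻³ kg/s (the illustrative input values here are not from the paper).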
The pitch-control system for wind turbines with a capacity of more than 1 MW is categorized as a passive system since it is preinstalled and does not incur substantial additional costs to operate. However, for wind turbines of up to 1 MW capacity, a feasibility study of the regulation system is required: for pitch control in a lower-capacity turbine, cost must be weighed against effectiveness. The possible effect can be estimated from the increase in power output. Figure 4 shows how the turbine power coefficient changes due to icing based on the airfoil data reported by Homola et al. [36]. The calculations are based on Wilson's equation [37] and involve different angles of attack and tip-to-speed ratios. It is noticeable that, when icing occurs, performance is influenced not only by the changes in the ratio of the lift and drag forces but also by changes in the tip-to-speed ratio. In the development of the pitch-control-based approach, optimal drag-to-lift ratios from several references, including airfoil performance data for both clean and icing conditions [32,33,35-39], are used. The data cover several wind speeds, as seen in Table 2. Wilson's equation [37] with different angles of attack and tip-to-speed ratios is used to predict the maximum power coefficient of the wind turbine. The results are presented under four conditions: (1) a clean turbine, (2) a turbine under icing conditions, (3) a turbine under icing conditions with pitch control, and (4) a turbine under icing conditions with combined pitch and tip-to-speed ratio control. The results for different wind speeds are summarized in Table 2. This analysis reveals that pitch control can overcome some of the icing effects, but the combined pitch and tip-to-speed ratio control has an even higher loss reduction potential. Therefore, the potential of the combined control is examined more closely in the following section.
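The exact form of Wilson's equation used in [37] is not reproduced in the text; the sketch below uses one widely quoted Wilson-type empirical relation for the maximum rotor power coefficient (an assumption on our part, as are all names and numbers) to illustrate the qualitative point: icing lowers the airfoil's lift-to-drag ratio, and re-optimizing the tip-to-speed ratio recovers part of the loss.

```python
def cp_max(tsr, blades, lift_to_drag):
    """Empirical maximum rotor power coefficient (a Wilson-type relation,
    roughly valid for 4 <= tsr <= 20, 1-3 blades, lift_to_drag >= 25).
    This stand-in formula is an assumption, not taken from [37]."""
    induction = (16.0 / 27.0) * tsr / (
        tsr + (1.32 + ((tsr - 8.0) / 20.0) ** 2) / blades ** (2.0 / 3.0))
    drag_loss = 0.57 * tsr ** 2 / (lift_to_drag * (tsr + 1.0 / (2.0 * blades)))
    return induction - drag_loss


tsrs = [t / 10.0 for t in range(40, 121)]          # candidate tip-to-speed ratios
cp_clean = max(cp_max(t, 3, 100.0) for t in tsrs)  # clean airfoil
cp_fixed = cp_max(7.0, 3, 30.0)                    # iced airfoil, tsr held at 7
cp_opt = max(cp_max(t, 3, 30.0) for t in tsrs)     # iced airfoil, tsr re-optimized
```

With these illustrative numbers the re-optimized tip-to-speed ratio recovers part of the icing loss (cp_fixed < cp_opt < cp_clean), mirroring the ordering of the iced, combined-control and clean cases in Table 2.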
Figure 4. Turbine power coefficient changes due to icing based on the drag-to-lift data (adopted from [34]).
Table 2. Results summary of the performance analysis for a wind turbine with and without icing (adopted from [34]). The results are presented for a clean turbine (Cp_clean), a turbine under icing conditions (Cp_icing), a turbine under icing conditions with pitch control (Cp_α), and a turbine under icing conditions with pitch and tip-to-speed ratio control (Cp_αλ).

Results

This section presents the results of two hybrid system control modes and compares their performance with that of the diesel-only system. The data are presented for a conventional load following mode based on field data and for a simulated IACS operating mode. Additionally, the potential of the combined pitch and tip-to-speed ratio control approach is demonstrated, and the turbine icing modeling approach is verified.

Hybrid System Modes

A comparative analysis of the operating modes of an existing Arctic wind-diesel hybrid system was carried out to assess the practical benefits of the implementation of intelligent control algorithms. The correlation between the structure of the considered equipment and high renewable energy penetration is illustrated in Figure 5. The model used is compiled in Python (Python Software Foundation, Gemini Dr., Beaverton, OR, USA), on the core of a real wind-diesel power plant (WDPP) control system. The system has the following properties: a wind turbine of 100 kW, a full capacity converter, a diesel generator set of 110 kW, a battery energy storage system with a capacity of 200 kWh, a dump load with a capacity of 70 kW, and a real load graph (max 55 kW). The limitations of the model are as follows. Firstly, loading of the initial data does not take into account the delay in downloading them. Secondly, the authors do not investigate the influence of the model's operation speed on the signals of real facilities. (1) Load following mode: In this mode, the diesel generator outputs electricity in accordance with the load (leading mode).
The surplus electricity is first used to charge the battery and then to heat the water using the dump load. Disconnection of the diesel component is possible in the case of a fully charged battery and a prolonged excess of wind turbine output over the load. The diesel component is switched on when the battery voltage reaches the specified minimum. Thus, the battery works in deep cycles during periods of high winds. The dump load, together with the battery, contributes to the regulation of the network voltage to achieve its stable operation and acts as a buffer for load fluctuations. Surplus energy is utilized in the form of useful heat for heating needs. Figure 6 shows hourly balances of power in supervisory control and data acquisition (SCADA)-based monitoring data over five days. From the figure, it is noticeable that for the majority of the time when the wind turbine is operating, the power balance significantly exceeds the load.

Figure 6. Power balance under the load following mode (SCADA measurements).

(2) Cycle charge with short-term forecasting mode: In this mode, the diesel generator works as an additional source of energy to cover power shortages. In the case of favorable wind turbine output forecasts, it is switched off. At the same time, the battery is used more efficiently and the size of the buffer capacity of the dump load is reduced. The changes in dump load performance are visible when comparing Figures 6 and 7. From Figure 7, it can be seen that the diesel generator is repeatedly replaced by the battery discharge. A more detailed comparison of the results is summarized in Table 3. The integration of the wind turbine and battery into the system during the analysis period facilitated fuel savings of 38%; however, to ensure the stable operation of the power system, a significant portion of the electricity generated by the wind turbine (48%) was distributed to the secondary regulatory load.
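The two dispatch strategies described above can be reduced to the following per-step logic. This is a simplified sketch with hourly steps (so kW and kWh coincide) and illustrative function names of our own; the real controller additionally handles voltage regulation and diesel start/stop hysteresis as described in the text.

```python
def load_following_step(load, wind, soc, capacity, dump_max):
    """One step of the load following mode: the diesel set covers any
    shortfall, while wind surplus first charges the battery (up to its
    capacity) and then goes to the dump load."""
    surplus = wind - load
    diesel = max(0.0, -surplus)
    to_battery = to_dump = 0.0
    if surplus > 0:
        to_battery = min(surplus, capacity - soc)
        to_dump = min(surplus - to_battery, dump_max)
    return diesel, to_battery, to_dump


def cycle_charge_step(load, wind, soc, soc_min, forecast_favorable):
    """One step of the cycle charge mode with short-term forecasting:
    while the forecast is favorable and the battery stays above its
    minimum, shortfalls are covered by battery discharge instead of
    running the diesel set."""
    shortfall = max(0.0, load - wind)
    if forecast_favorable and soc - shortfall > soc_min:
        return 0.0, shortfall  # diesel off, battery discharges
    return shortfall, 0.0      # diesel covers the shortfall
```

The second function makes explicit why the cycle charge mode deepens battery cycling: every favorable forecast hour shifts the shortfall from the diesel set to the battery.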
With the wind turbine production forecasts and battery operation functioning in a cyclic mode, the share of wind turbine energy going to the secondary control load decreased to 38% and renewable energy penetration increased to 60%. At the same time, the number of battery cycles increased 2.2 times, up to two cycles of 80% charge/discharge per day (660 cycles per year). In their research employing similar system components (diesel generator, wind turbine, and battery), Elkadeem et al. [5] were able to reduce diesel fuel consumption by 85% (compared with a diesel-only system); however, their study had a significantly larger relative share of wind power capacity than the current study (more than two times the diesel generators' power). Compared with the load following mode, doubling the share of wind turbines thus seems to roughly double the fuel savings. In relative terms, Li et al. [6] employed approximately similar diesel generator and wind power capacities but had significantly higher battery capacity (roughly three times higher). Their study reports a fuel saving of 74%. This leads to the conclusion that the proposed cycle charge with short-term forecasting mode can offer fuel savings comparable to adding significantly more wind power or battery capacity, without installing any actual new capacity.

The Effect of Pitch and Tip-to-Speed Ratio Control

To verify the chosen wind turbine icing modeling approach, the performance values of Table 2 were used to build power curves for the clean and icing cases. The results of the modeling were then compared with the predictions from the Finnish Icing Atlas, which is based on the Finnish Wind Atlas [39] and ice aggregation modeling according to standard ISO 12494:2001.
Since the figures in the Finnish Icing Atlas are reasonably sensitive to location, an area with a radius of 30 km was used to determine the maximum production loss values in each area for comparison, with the exact location values presented in parentheses to illustrate local variations. The comparison reveals that the model presented in this work generally overestimates losses (Table 4). The magnitude of the predicted losses is, however, still similar to that observed. This indicates that the proposed model can produce reasonable estimates of icing effects even though it is based on single airfoil data rather than data for full turbine blade shapes. To implement the developed models in the energy system modeling tool, three curve fits were built based on the Madetkoski data from Finland. Figure 8 presents the power curves for the clean case, the icing case, and the pitch and tip-to-speed (optimized) control case, and illustrates how the applied control approach can affect turbine performance. The total positive effect of optimized control on turbine power below the nominal operating point is between 2% and 5%. For wind turbines of medium and high capacity in Arctic zones, the optimized control system is advisable since the total cost of its installation is less than the total economic savings it can achieve. However, for wind turbines with a capacity of less than 300 kW, the installation of such a system must be confirmed by the relevant technical and economic analyses.

Conclusions

1. The article presents the architecture of an advanced intelligent automatic control system for a wind-diesel hybrid system with high renewable energy penetration and describes the main modes of its operation. 2.
The integration of the wind turbine and battery into the hybrid system enabled fuel savings of 38%, which were achieved by replacing the power generated by the diesel engine with wind turbine and battery power. In the load following mode, it was possible to disconnect the diesel generator when the battery was fully charged and wind turbine production was high. The dump load and the battery were used to regulate the network voltage. The excess electricity produced was used for heating. 3. With the addition of wind speed forecasting (LSTM model) and the cyclic charge mode, the share of wind turbine energy going to the secondary dump load decreased to 38%, and diesel fuel savings increased to 60%. Overall, the fuel savings correspond to the effects of significant additions of either wind turbine or battery capacities. 4.
The net savings of using pitch and tip-to-speed ratio control exceed the cost of installing this system for medium- and high-capacity wind turbines. The use of the icing prediction unit in conjunction with weather forecasting and the turbine control system provides more reliable operation of the wind turbine in harsh climatic conditions. It is estimated that these systems can reduce the operational expenditure (OPEX) by approximately 20%. It is worth noting that the article does not examine the diagnostic unit. The current trend is the "complication" of data analytics towards deep machine learning and predictive diagnostics to prevent accidents through the application of accumulated experience and the analysis of large amounts of data. The article considers the example of the onset of atmospheric icing; however, in the icing prediction block, it is necessary to calculate the icing intensity using formula (1). In future research, these indicators will be studied in greater detail.

Author Contributions: The article is the result of the efforts of two working groups, one in St. Petersburg (SPbPU, Russia) and one in Lappeenranta (LUT University, Finland). Conceptualization, supervision, and project administration were overseen by V.E. and T.T.-S.; methodology, IACS architecture, and visualization were overseen by R.D.; modeling and validation of hybrid system modes were overseen by M.K.; modeling and validation of the pitch control system were overseen by A.J.L. and A.G.; formal analysis, investigation, and formalization of results were overseen by I.B. All authors have read and agreed to the published version of the manuscript.

Funding: The research was carried out as part of the World-Class Research Center Program: Advanced Digital Technologies (contract No. 075-15-2020-934 dated 17.11.2020) and supported by the ENI CBC project KS1054, "Energy-efficient systems based on renewable energy for Arctic conditions".
Data Availability Statement: Nature data were gathered from the SCADA hybrid system project in Russian Arctic areas. Data for the icing calculations were collected from the Finnish Icing Atlas (Tammelin et al., 2011) and ice aggregation modeling according to standard ISO 12494:2001.
Case finding of dry eye disease in Norwegian optometric practice: a cross-sectional study

Optometrists are primary eye care providers, and it is essential that they efficiently identify patients who will benefit from dry eye management. The aim of the study was to explore case finding of dry eye disease (DED) in optometric practice. A cross-sectional study examining dry eye symptoms and signs in 186 patients (18–70 years of age) attending a routine eye examination, with DED defined according to the criteria of the Tear Film and Ocular Surface Society Dry Eye Workshop II. Standard statistical tests were used, and clinical diagnostics were explored using sensitivity, specificity, and receiver operating curve (ROC) statistics. Fifty-six patients were contact lens wearers, and they were significantly younger than the non-contact lens wearers (mean age 35 (SD = 1) versus 48 (± 2) years). The mean best corrected visual acuity (BCVA) in the better eye was 1.0 (± 0.1) (decimal acuity). There was no difference in BCVA between contact lens wearers and non-contact lens wearers. The mean Ocular Surface Disease Index (OSDI) score was 22 (± 19), and 138 patients had at least one positive homeostasis marker. Eighty-six had DED, 52 had signs without symptoms, and 23 had symptoms without signs of DED. The sensitivity and specificity of OSDI in detecting any positive homeostasis marker were 62% and 54%, respectively. In all, 106 patients had meibomian gland dysfunction (MGD), of which 49 were asymptomatic. In a ROC analysis, an OSDI ≥ 13 showed a diagnostic ability to differentiate between patients with a fluorescein breakup time (FBUT) < 10 seconds and a fluorescein breakup time ≥ 10 seconds, but not between patients with and without staining or MGD. The majority of patients had dry eye signs and/or dry eye symptoms. Routine assessment of FBUT and meibomian glands may enable case finding of DED in optometric practice.
Introduction

The Tear Film and Ocular Surface Society Dry Eye Workshop II (TFOS DEWS II) defines dry eye disease (DED) as "a multifactorial disease of the ocular surface characterized by a loss of homeostasis of the tear film, and accompanied by ocular symptoms, in which tear film instability and hyperosmolarity, ocular surface inflammation and damage, and neurosensory abnormalities play etiological roles". The prevalence of DED varies from 5% to 50%, depending on the study population and diagnostic criteria, and is higher among females, in older age groups, and among people of Asian ethnicity. DED is associated with ocular pain and irritation, blurred vision, and anxiety and depression, and may limit daily activities and reduce work effectiveness and quality of life. Consequently, DED has significant socioeconomic implications (Li et al., 2012; Stapleton et al., 2017; Uchino et al., 2014; Wan et al., 2016). According to the TFOS DEWS II report, the diagnosis of dry eye should include assessment of both dry eye symptoms and tear film homeostasis markers. When DED is confirmed, further testing for sub-classification of DED and grading of severity is needed, as treatment should be tailored to the type and severity of DED. Tests that differentiate evaporative dry eye (EDE) from aqueous deficient dry eye (ADDE) are essential, as these conditions are managed differently. Visual function is affected in DED, and decreased vision and transient blurring of vision are common complaints in DED patients (Ishida et al., 2005). Meibomian gland dysfunction (MGD) is the leading cause of EDE and associated ADDE. Among people with DED, 13% to 50% have MGD (Arita et al., 2019; Uchino et al., 2006; Viso et al., 2011). In people over 40 years of age, 38% to 68% have MGD, dependent on the population and the applied diagnostic criteria. Patients may have MGD without symptoms; these patients are often undiagnosed (Blackie et al., 2010).
The TFOS International Workshop on Meibomian Gland Dysfunction (MGD report) suggests that meibomian gland expression should be part of routine examination in adults and that a dry eye work-up should be undertaken in patients with MGD regardless of symptoms (Tomlinson et al., 2011). Optometrists are primary eye care providers, and it is essential that they efficiently identify patients who will benefit from dry eye management. Studies report significant differences in the examination of dry eye patients and a potential to enhance the identification of patients at risk of DED (Downie et al., 2013; Downie et al., 2016; van Tilborg et al., 2015), consequently indicating a need to improve and standardise the examination and diagnosis of DED in optometric practice. The aim of this study was to explore case finding of DED in general Norwegian optometric practice.

Methods

The study had a cross-sectional design. The study population was recruited from people attending for a routine eye examination by one dedicated optometrist in each of three Krogh Optikk practices in Trondheim and Oslo, Norway. To minimize observer bias, the optometrists followed written instructions on how to perform the dry eye examination, and standardised equipment was used for all patients. All patients aged 20 to 70 years attending for an eye examination or a contact lens fitting/follow-up during the period between 15th December 2015 and 1st February 2016 were invited to participate. All patients were given oral and written information and gave informed consent to take part in the study. Patients with other known ocular surface inflammations, previous trauma affecting the tear film examination, or known hypersensitivity to lissamine green and/or fluorescein were excluded from the study.
Data collection

The scheduled routine examination was undertaken, including patient history of contact lens wear, the use of systemic medication and computer screens, as well as decimal visual acuity at six metres equivalent distance. Further, a full dry eye examination was performed. The dry eye examination included the Ocular Surface Disease Index (OSDI) questionnaire, assessment of tear meniscus height (TMH), fluorescein tear breakup time (FBUT), corneal and conjunctival staining, meibum expressibility, and meibum quality. The sequence of tear film tests was the same for all patients, starting with the least invasive tests first. The participants started by answering the OSDI questionnaire. The OSDI questionnaire consists of 12 questions about symptoms, visual function, and environmental triggers, based on patients' experience of symptoms in the previous week. Each question was answered on a scale from 0 (none of the time) to 4 (all of the time). The total composite score (0-100) was calculated according to the formula of Schiffman et al. (2000). A normal ocular surface score is in the range of 0-12; a score of 13-22, 23-32, or 33-100 represents mild, moderate, or severe dry eye symptoms, respectively (Miller et al., 2010; Schiffman et al., 2000). The tear meniscus height (TMH) was then examined with a slit lamp. The width of the slit was adjusted to be identical to the height of the tear meniscus, and the width of the slit in millimetres was recorded as the TMH. The fluorescein tear breakup time (FBUT) was measured by wetting a fluorescein strip with sterile saline solution and shaking off the excess saline; the strip was then carefully applied to the lower temporal conjunctiva starting with the right eye. There was one application of fluorescein in each eye, and no break between the examination of the right eye and left eye. The FBUT was observed using 10 times slit lamp magnification, cobalt blue light, and a yellow barrier filter.
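The OSDI scoring described above can be sketched as follows. The formula (sum of the answered items, each 0-4, multiplied by 25 and divided by the number of items answered) is the published Schiffman et al. (2000) composite; the function names are ours, and the severity bands are those quoted in the text.

```python
def osdi_score(responses):
    """OSDI composite score (0-100) per Schiffman et al. (2000):
    sum of answered items (each 0-4) times 25, divided by the number
    of items answered. Unanswered items are passed as None."""
    answered = [r for r in responses if r is not None]
    if not answered:
        raise ValueError("at least one question must be answered")
    return sum(answered) * 25 / len(answered)


def osdi_severity(score):
    """Severity bands used in the study."""
    if score < 13:
        return "normal"
    if score <= 22:
        return "mild"
    if score <= 32:
        return "moderate"
    return "severe"
```

For example, a patient answering "some of the time" (2) on six items and skipping the rest scores 2 × 6 × 25 / 6 = 50, i.e. severe symptoms.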
The patient was instructed to blink twice and then look straight ahead with their eyes open. The time in seconds from the last blink to the first dry spot appearing was measured by stopwatch and recorded. If the patient blinked before the tear film break was observed, the time to first blink was recorded. The measurement was repeated three times for each eye, and the mean value for each eye was calculated and recorded as the FBUT. The FBUT for the worst eye was used for analysis. For corneal and conjunctival staining, a strip impregnated with a mixture of 1.5 mg fluorescein and lissamine green was wetted with saline solution and applied to the lower temporal fornix. Corneal and conjunctival staining were observed using 16 times slit lamp magnification, using cobalt blue light with a yellow barrier filter, and white light, respectively. The staining was graded (0-5) according to the Oxford grading scheme (Bron et al., 2003). Meibomian glands in the central part of the lower eyelid were examined for gland expressibility and meibum quality using digital pressure with cotton swabs for all participants. Five glands in the central part of the lower eyelid were graded (0-3) for expressibility: grade 0 when all glands were expressible, grade 1 when 3-4 glands were expressible, grade 2 when 1-2 glands were expressible, and grade 3 when no glands were expressible. The meibum quality of eight glands in the central part of the lower eyelid was graded from 0-3, giving a total score of 0-24. Grade 0 represented clear meibum fluid; grade 1, cloudy fluid; grade 2, cloudy fluid with debris; and grade 3, toothpaste-like meibum. MGD was defined as equivalent to stage 2 of the treatment algorithm for MGD, as either grade ≥ 1 for meibum expressibility or a sum score of ≥ 4 for meibum quality (Geerling et al., 2011; Nichols et al., n.d.; Tomlinson et al., 2011).
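The MGD criterion above reduces to a simple predicate; this is a minimal sketch with an illustrative function name of our own.

```python
def has_mgd(expressibility_grade, meibum_quality_grades):
    """MGD criterion used in the study: central lower-lid expressibility
    grade >= 1 (i.e. not all five glands expressible) or a summed meibum
    quality score >= 4 over the eight graded glands (each 0-3)."""
    return expressibility_grade >= 1 or sum(meibum_quality_grades) >= 4
```

So, for instance, a patient with all glands expressible (grade 0) but four glands producing cloudy fluid (quality sum 4) meets the MGD criterion.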
Definition and classification of dry eye disease and MGD

Dry eye disease was defined according to the recommendations of the TFOS DEWS II report. An OSDI score ≥ 13 was set as the criterion for dry eye symptoms. If, in addition, one or both homeostasis markers (FBUT and ocular surface staining) were positive, then DED was confirmed. A positive result for FBUT was defined as < 10 seconds. Positive ocular surface staining was defined as Oxford grade > 1, which is equivalent to > 5 spots in the cornea or > 9 spots on the conjunctiva. TMH and meibomian gland function were used to sub-classify dry eye disease as ADDE, EDE, a mix of both, or unclassifiable. ADDE was defined by a TMH < 0.2 mm and EDE by the presence of MGD.

Statistics

The data were analysed in frequency and summation tables. Group differences and associations were analysed with standard parametric and non-parametric statistical tests: chi-square, Student's t-test, and Spearman correlation. Clinical diagnostics were explored by the calculation of sensitivity and specificity and receiver operating curve (ROC) statistics. A p-value of < 0.05 was considered statistically significant.

Ethics

The research conformed to the Declaration of Helsinki, and the study was approved by the Regional Committee for Medical and Health Research Ethics (2015/2492).

Results

In all, 186 patients were examined, of which 118 (63%) were female. Their mean age was 44 years (± 15), ranging from 20 to 70 years. The mean age of females was 44 years (± 14), and the mean age of men was 45 years (± 15). Fifty-six patients (30%) were contact lens wearers; the contact lens wearers were significantly younger than non-contact lens wearers (mean age 35 (± 1) versus 48 (± 2) years; Student's t-test, p < 0.001). All patients had normal vision; the mean best corrected decimal visual acuity (BCVA) in the better eye was 1.0 (± 0.1). BCVA was correlated with age (r_s = −0.294, p < 0.001).
There was no difference in BCVA between contact lens wearers and non-lens wearers or between males and females. The patients' mean OSDI score was 22 (± 19). The OSDI score was not associated with sex, age, contact lens wear, or BCVA. In all, 109 patients (58.6%) had dry eye symptoms; of these, 41 (37.6%), 26 (23.9%) and 42 (38.5%) had mild, moderate, and severe symptoms, respectively. In all, 138 patients (74.2%) had at least one positive homeostasis marker of DED (FBUT < 10 seconds and/or staining > Oxford grade 1); of these, 86 had dry eye symptoms (OSDI score ≥ 13) (see Table 1). Reduced FBUT and staining were not associated with sex, age, or contact lens wear. In all, 106 (57.0%) patients had MGD; 49 (46.2%) of these were asymptomatic. Reduced TMH was found in 61 (32.8%) patients; of these, 30 (49.2%) were asymptomatic. Among all patients, 34 (18.3%) had both MGD and reduced TMH (see Table 1). Among the symptomatic patients with MGD, MGD and reduced TMH, and reduced TMH, 6 (8.3%), 3 (8.8%) and 5 (18.5%), respectively, did not have positive homeostasis markers (dry eye signs). In all, 86 patients (46.2%) had DED (see Table 2). DED was not associated with sex, age, contact lens wear or BCVA. MGD and reduced TMH were not correlated with DED, sex or contact lens wear. MGD, but not reduced TMH, was correlated with age (r_s(186) = 0.255, p < 0.001) (see Table 3). DED could be classified in 59 (68.6%) of the patients with DED (see Table 2). There was no statistically significant difference in the type of DED between males and females or between contact lens wearers and non-contact lens wearers. Twenty-three patients (12.4%) had dry eye symptoms without dry eye signs, and 52 (28.0%) had dry eye signs without symptoms (see Figure 1). The sensitivity and specificity of OSDI in detecting any positive homeostasis marker were 62% and 54%, respectively. Table 4 shows the diagnostic accuracy of OSDI ≥ 13 in identifying people with positive homeostasis markers for DED and MGD.
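The diagnostic pathway used in the study (symptoms via OSDI ≥ 13; homeostasis markers FBUT < 10 s or staining > Oxford grade 1; sub-classification via TMH < 0.2 mm and MGD) can be sketched as follows. The function name and return labels are illustrative; the four outcome categories mirror those used in Figure 1.

```python
def classify_dry_eye(osdi, fbut_s, oxford_grade, tmh_mm, mgd):
    """Apply the TFOS DEWS II-based criteria described in the study.

    osdi: OSDI composite score (symptoms if >= 13)
    fbut_s: fluorescein breakup time, worst eye, in seconds (positive if < 10)
    oxford_grade: ocular surface staining, Oxford scheme (positive if > 1)
    tmh_mm: tear meniscus height in mm (ADDE marker if < 0.2)
    mgd: True if meibomian gland dysfunction is present (EDE marker)
    """
    symptoms = osdi >= 13
    signs = fbut_s < 10 or oxford_grade > 1
    if symptoms and signs:
        if tmh_mm < 0.2 and mgd:
            return "DED (mixed ADDE/EDE)"
        if tmh_mm < 0.2:
            return "DED (ADDE)"
        if mgd:
            return "DED (EDE)"
        return "DED (unclassifiable)"
    if signs:
        return "predisposition (signs without symptoms)"
    if symptoms:
        return "pre-clinical (symptoms without signs)"
    return "healthy"
```

For example, a patient with OSDI 25, FBUT 8 s, no significant staining, normal TMH and MGD would be classified as evaporative dry eye.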
In a ROC analysis, OSDI ≥ 13 showed a diagnostic ability to discriminate between patients with fluorescein breakup time < 10 seconds and fluorescein breakup time ≥ 10 seconds, but not between patients with and without staining or MGD. The optimal cut-off value for the OSDI score was 10.41.

Figure 1: Distribution of participants with dry eye, pre-clinical dry eye, predisposition to dry eye and healthy eyes by OSDI score and homeostasis markers. Negative OSDI score (OSDI < 13): healthy eyes (no signs or symptoms of dry eye; true negative), 14%; predisposition to DED (signs of dry eye but no symptoms; false negative), 28%. Positive OSDI score (OSDI ≥ 13): pre-clinical DED (symptoms of dry eye but no signs; false positive), 12%; DED (signs and symptoms of dry eye; true positive), 46%.

Discussion

In this study, most participants had symptoms or signs of dry eye disease, and almost half had dry eye disease. The prevalence of DED is at the high end of the previously reported prevalence range. This may reflect the diagnostic criteria in our study. We defined DED based on symptoms and signs according to the guidelines of the TFOS DEWS II report. The definition of dry eye disease in previous studies varies in terms of cut-off values for symptoms and signs, as well as in study populations. Studies using both OSDI and signs report a prevalence of 8.7-10.7%; however, these studies applied a higher cut-off criterion for OSDI (≥ 23 and > 22), and one also applied a lower cut-off criterion for TBUT (Hashemi et al., 2014; Malet et al., 2014). This may explain the higher prevalence found in our study, as the TFOS DEWS II also included patients with mild symptoms (OSDI score 13-22) in the diagnosis. Furthermore, the present study included patients attending for a routine eye examination, who may therefore be more likely to have visual and ocular problems since they are seeking eye care. Nevertheless, our study illustrates the importance of dry eye assessment in optometric practice.
DED was not found to be associated with sex, age, or contact lens wear. These findings contradict other studies, which have shown an increased prevalence of DED with increasing age (Farrand et al., 2017; Stapleton et al., 2017), a higher prevalence of DED in females than in males (Hashemi et al., 2014; Stapleton et al., 2017), and that DED is associated with contact lens wear ("The Epidemiology of Dry Eye Disease: Report of the Epidemiology Subcommittee of the International Dry Eye WorkShop", 2007). The lack of association between DED and sex, age, and contact lens wear in our study may reflect the inclusion of all stages of DED and the relatively young age of our participants. Moreover, age-related DED as well as contact lens complications in the younger contact lens wearers could mask differences between contact lens wearers and non-contact lens wearers. Previous studies have shown that differences between males and females become significant only in older age (Paulsen et al., 2014; Stapleton et al., 2017), and comparable studies have examined patients of higher age than in our study. Also, the lack of difference in DED between males and females could be due to the low sample size and the few men included in the study. Our findings may imply that case finding of dry eye disease in optometric practice is equally important in men and women, as well as in both contact lens wearers and non-contact lens wearers. One in five participants with dry eye symptoms did not have findings of dry eye disease, and seven out of ten asymptomatic participants had findings of dry eye disease. This finding is supported by previous studies that have reported a lack of consistency and low association between signs and symptoms in DED (Bartlett et al., 2015; Stapleton et al., 2017). This reflects the need for evidence-based guidelines in optometric practice including both symptoms and signs of DED to detect affected patients.
By only using history and symptoms, including a questionnaire, some patients who might benefit from management of DED will likely continue to be undetected. The OSDI score significantly differed between participants with and without reduced TBUT. This may reflect an unstable or irregular tear film, affecting optical quality and causing visual disturbance (Herbaut et al., 2019; Koh, 2018). However, there was no significant difference in BCVA between participants with and without DED. Nevertheless, vision may be affected even though visual acuity is normal, as an unstable tear film may cause higher order aberrations (Koh, 2018). Measurement of higher order aberrations was outside the scope of this study. Moreover, the association between TBUT and dry eye symptoms may also relate to dryness of the ocular surface caused by evaporation. Reduced TBUT differentiated between participants with and without MGD, and MGD may cause both ocular discomfort and visual disturbance through a reduced function of the lipid layer, increasing tear evaporation and impeding the spread of the tear film over the ocular surface (Green-Church et al., 2011; Millar & Schuett, 2015). MGD may reduce lipid layer thickness and alter the lipid composition of the tear film, and previous studies report reduced TBUT in all subtypes of MGD (Xiao et al., 2020), as well as improved TBUT and reduced symptoms when MGD is treated (Lee et al., 2017). The unstable tear film caused by MGD may cause corneal exposure and staining, which in turn further destabilise the tear film (McMonnies, 2018), increasing tear evaporation and worsening the condition. Half of the participants with MGD in our study had no symptoms. The MGD report suggests that a dry eye work-up should be undertaken in patients with MGD regardless of symptoms (Tomlinson et al., 2011). This highlights the value of including TBUT as well as the assessment of meibomian gland function in routine eye examinations to detect DED.
Almost half of the patients in the study had DED and required treatment to restore homeostasis. In addition, nearly one third were predisposed to DED, and one in ten had pre-clinical dry eye, which should also be considered for the preventive treatment of DED. This underlines the potential role of the optometrist in case finding, prevention, diagnosis, and management of DED. Three out of ten cases of DED had normal TMH and normal meibomian gland function. This was not associated with contact lens wear, and the data were collected in winter, ruling out seasonal allergy and contact lens wear as likely explanations. Therefore, this may reflect other causes of staining and reduced TBUT, such as mucin deficiency and reduced blink rate and blink completeness (McMonnies, 2018), which also affect tear film stability. Mucin deficiency may contribute to increased tear evaporation (Willcox et al., 2017). Evaluation of blink rate, blink completeness, and the mucin layer may provide further explanation of the underlying cause of DED. The strength of this study is that it represents a true, real-life clinical setting. All the dry eye tests used are well-known, standardised tests available to optometrists without the need for additional expensive instrumentation. However, the lack of tear osmolarity in our test battery may have underestimated the prevalence of DED. The use of FBUT instead of NIBUT may have affected tear film stability and underestimated the frequency of reduced breakup time and consequently DED. Moreover, it would also be useful to include meibography to support the diagnosis of MGD. In opposition to the discussed possible underestimation of DED, there could also be a selection bias in our study, overestimating the prevalence of DED, as people having symptoms may be more eager to participate in the study than participants without symptoms.
Our study was undertaken in 2015-2016, prior to the publication of the DEWS II report; hence this study did not include triaging questions that can differentiate DED from signs and symptoms of other causes. However, our analysis did not find any correlation between DED and risk factors like contact lens wear and medication use. Hence the prevalence of DED in our study likely represents true DED. The inclusion of three optometric practices and three different optometrists could also have introduced observer bias into the findings. However, written instructions for the dry eye assessment were given to the optometrists to ensure standardised examination and reduce bias.

Conclusion

In our study, the majority of patients had dry eye signs and/or dry eye symptoms. More than four out of five would benefit from management of dry eye or pre-clinical findings of dry eye, or from advice on predisposition to dry eye. Screening with the OSDI questionnaire showed low sensitivity and specificity in identifying patients with and without positive homeostasis markers. Including assessment of FBUT and the meibomian glands in the routine eye examination may enhance case finding of patients with dry eye or those at risk of developing dry eye. The additional use of the OSDI questionnaire in patients with positive homeostasis markers will identify patients with DED or patients at risk of developing DED.
Herpes Zoster in a 13-Year-Old Male Without Prior Varicella Infection

Herpes zoster (HZ) typically presents following reactivation of latent varicella-zoster virus (VZV) in adult and geriatric patients with a history of prior varicella infection. Primary VZV infection in patients compliant with vaccine schedules and without any immunocompromising condition is rare, with reactivation leading to HZ being even rarer. This case report details one such example involving a 13-year-old immunocompetent and fully immunized male with HZ despite no history of VZV infection, as well as possible explanatory mechanisms for this uncommon presentation. This case report contributes to a growing body of literature on atypical HZ presentations in pediatric populations without any history of prior VZV infection or exposure.

Vaccination with a live, attenuated VZV strain is the current recommended method of achieving immunity from VZV infection, impeding future HZ eruptions. Primary immunization is achieved by administering a two-dose series at 12 to 15 months of age and four to six years of age [4]. Herpes zoster has been reported to be 78% less common in vaccinated pediatric populations without underlying immunodeficiency when compared to their unvaccinated peers in the United States [5]. Data from other developed countries, however, have indicated a recent increase in HZ rates in vaccinated pediatric populations [6], further supporting the investigation and differential diagnosis of HZ in patients with vaccination and without prior VZV diagnosis.
Case Presentation

A previously healthy, immunocompetent 13-year-old male initially presented to his local urgent care with the chief complaint of left lower extremity myalgias. Following a physical exam and evaluation of his medical history, which included active participation on a basketball team, he was diagnosed with a muscle strain of the left thigh and provided with education on muscle strain symptom management. Seventy-two hours later, the patient presented to his primary care provider (PCP) with a complaint of a painful rash on his left lower back. The onset of the rash was several hours prior to his presentation to his PCP, and the patient described it as burning and itching. The patient also reported subjective fevers but denied any other associated symptoms. The patient's parents denied any prior chicken pox diagnosis and affirmed that he had received the varicella vaccine in accordance with routine childhood vaccination scheduling. There is no documented history of immunocompromising diseases (patient or family) or new/altered medication regimens.

Upon physical examination, a grouped, unilateral vesicular rash with surrounding erythema (Figure 1) was noted in an L4 dermatomal distribution on his left side. The erythematous portion of the rash was blanchable, and there were no other signs indicative of infection. A clinical diagnosis of HZ was made due to the dermatomal distribution of the rash as well as the patient's clinical history of significant neuropathic pain/myalgia prior to the rash's appearance. The patient was prescribed valacyclovir 1,000 mg three times daily (TID) for seven days, with resolution of his myalgias the day treatment began and resolution of the vesicular rash within five days.

FIGURE 1: Clinical presentation of the patient after rash eruption

Varicella zoster immunoglobulin M (IgM) and IgG were ordered both on the day of diagnosis and six months following, per parent request (Table 1).
Discussion

Herpes zoster manifests following the reactivation of the latent VZV. Primary infection with VZV is commonly known as chickenpox, presenting as a widespread rash with lesions in different stages of healing, and is typically self-limited. Upon reactivation, HZ classically presents as a painful, unilateral vesicular rash in adults, which is currently understood to be due to the age-related decline in virus (VZV)-specific cell-mediated immunity. However, pediatric patients with HZ can present slightly differently, with itching and then subsequent pain, fever, and weakness. Childhood HZ also seems to be slightly more common in males than females, though further data are necessary to confirm this finding [7]. It is thought that both wild-type and vaccine viruses can remain latent and be subsequently reactivated, given the necessity for an explanatory mechanism of disease pathogenesis in cases such as this one [8].

Despite the potential for the live, attenuated varicella vaccine to become latent in a small subset of patients, the data in support of childhood vaccination are excellent. Current CDC recommendations for pediatric varicella vaccination are to receive the first dose at age 12 through 15 months and the second dose at age four through six years old [4]. A case-control study published in the Journal of Infectious Diseases found the effectiveness of receiving both vaccine doses to be 98.3% [9], corroborating current CDC recommendations. The prevention of varicella is vital, as post-infection sequelae include Group A streptococcal infections of the skin and soft tissue, pneumonia, encephalitis, cerebellar ataxia, bleeding diatheses, and sepsis [10]. Further, pediatric patients who develop HZ can suffer from debilitating, protracted postherpetic neuralgia.
Clinical diagnosis of HZ can be challenging in atypical presentations such as vaccinated patients, particularly with respect to pediatrics. While data on this topic are sparse given the rarity of pediatric HZ, a previous examination of 39 reported cases found that, among the 33 patients who had received the varicella vaccination, the interval between vaccination and the presentation of HZ symptoms varied from 56 days to approximately nine years, underscoring the necessity of including HZ in the differential diagnosis of any patient with unilateral radiculopathy symptoms, even without the presence of a dermatomal rash [11].

Common disease processes with exanthems similar to HZ include herpes simplex, dermatitis herpetiformis, impetigo, and contact dermatitis presentations. Adequate follow-up is also essential for proper diagnosis and treatment and can further be supported by laboratory studies in cases with atypical presentations.

Treatment for HZ depends on immune status, disease complications, and patient age. For uncomplicated HZ, any of the following regimens is recommended: oral acyclovir 800 mg five times daily for seven days; valacyclovir 1,000 mg TID for seven days; or famciclovir 500 mg three times daily for seven days. Valacyclovir 1,000 mg TID for seven days has been shown to provide better pain relief than acyclovir 800 mg five times daily for seven days and is likely to have better treatment adherence due to a lower pill burden [12]. When post-herpetic neuralgia is identified, famciclovir has been shown to best reduce neuralgia duration. Regardless of the therapy chosen, pharmacotherapy proves to be more efficacious when implemented within 72 hours of cutaneous involvement.
Complicated HZ, marked by extensive cutaneous eruptions or visceral involvement, requires parenteral support, with acyclovir 10 mg/kg every eight hours for seven to 10 days for those under one year of age and 500 mg/m² for those over one year [13]. Patients with immunocompromised states such as HIV typically require referral to guidelines and infectious disease consults on a case-by-case basis.

Conclusions

The diagnosis of HZ is infrequent in vaccinated populations, particularly pediatrics. Regardless, it is imperative that clinicians retain a high index of suspicion such that even atypical presentations of HZ can be quickly identified and treated to avoid long-term sequelae. Herpes zoster should be ruled out even in cases of fully vaccinated, immunocompetent pediatric patients who present with unilateral, isolated pruritus or discomfort.
Effect of Routine Sterile Gloving on Contamination Rates in Blood Culture

BACKGROUND: Blood culture contamination leads to inappropriate or unnecessary antibiotic use. However, practical guidelines are inconsistent about the routine use of sterile gloving in collection of blood for culture.
OBJECTIVE: To determine whether the routine use of sterile gloving before venipuncture reduces blood culture contamination rates.
DESIGN: Cluster randomized, assessor-blinded, crossover trial (ClinicalTrials.gov registration number: NCT00973063).
SETTING: Single-center trial involving medical wards and the intensive care unit.
PARTICIPANTS: 64 interns in charge of collection of blood for culture were randomly assigned to routine-to-optional or optional-to-routine sterile gloving groups for 1854 adult patients who needed blood cultures.
INTERVENTION: During routine sterile gloving, the interns wore sterile gloves every time before venipuncture, but during optional sterile gloving, sterile gloves were worn only if needed.
MEASUREMENTS: Isolates from single positive blood cultures were classified as likely contaminant, possible contaminant, or true pathogen. Contamination rates were compared by using generalized mixed models.
RESULTS: A total of 10 520 blood cultures were analyzed: 5265 from the routine sterile gloving period and 5255 from the optional sterile gloving period. When possible contaminants were included, the contamination rate was 0.6% in routine sterile gloving and 1.1% in optional sterile gloving (adjusted odds ratio, 0.57 [95% CI, 0.37 to 0.87]; P = 0.009). When only likely contaminants were included, the contamination rate was 0.5% in routine sterile gloving and 0.9% in optional sterile gloving (adjusted odds ratio, 0.51 [CI, 0.31 to 0.83]; P = 0.007).
LIMITATION: Blood cultures from the emergency department, surgical wards, and pediatric wards were not assessed.
CONCLUSION: Routine sterile gloving before venipuncture may reduce blood culture contamination.
Blood culture is a simple and basic diagnostic procedure routinely used in clinical practice that yields essential information for the evaluation of various infectious diseases. A positive blood culture can demonstrate not only an infectious cause of disease but also a microbiological response to antibiotic therapy (1). However, studies have reported that 35% to 50% of positive blood cultures are falsely positive owing to contamination (1-3). False-positive cultures often cause serious interpretation problems, leading to the use of inappropriate or unnecessary antibiotics, additional testing and consultation, and increased length of stay, all of which increase health care costs (3). In a closed culture system in which blood is drawn directly into vacuum culture bottles, blood culture contamination occurs mainly during specimen collection (4). Various methods have been widely studied to reduce contamination rates, including skin disinfectants (5, 6), source of culture (7, 8), specialized phlebotomists (9), and changing of needles before inoculating culture bottles (10). To our knowledge, no data are available on the influence of sterile gloving on blood culture contamination rates. Consequently, some controversy exists about whether sterile gloving should be routinely used during collection of blood for culture. The current guidelines do not recommend the routine use of sterile gloving (11), whereas some experts prefer sterile gloving for collection of blood for culture (12). We sought to evaluate whether the routine use of sterile gloving before venipuncture reduces blood culture contamination rates compared with the optional use of sterile gloving in actual clinical practice.

Study Design

We conducted a prospective, cluster randomized, assessor-blinded, crossover, controlled trial.
Our study was conducted for 6 months in 2009 in 17 medical wards, including 14 general wards, 2 hematology wards, and 1 intensive care unit at Seoul National University Hospital, a 1600-bed, university-affiliated tertiary-care teaching hospital in Seoul, Republic of Korea. At this hospital, medical interns rather than dedicated phlebotomists are in charge of drawing blood for cultures. We did not include the emergency department because emergency medical technicians, as well as interns, draw blood for culture there. The interns in the hospital rotated from one department to another each month. The interns in the medical wards consented to participate in the study and took part for 1 month each. In each month, 6 to 7 interns were in charge of the 14 general wards, 2 interns of the 2 hematology wards, and 2 interns of the intensive care unit. We included all cultures using blood drawn from a peripheral vein in adult patients who needed 2 or more sets of blood cultures, and we excluded blood cultures from intravenous lines and similar access devices. Consent was obtained from all participating interns. We randomly assigned the interns to routine-to-optional or optional-to-routine sterile gloving groups by using computerized 1:1 random selection stratified by hospitalization unit (general wards, hematology wards, and intensive care unit). The interns in both groups were educated about the standard protocol for blood culture collection based on the current guidelines by using a lecture, video clip, and simulation practice 1 day before they started blood collection for the study (11). The interns were instructed to wear sterile gloves every time before venipuncture during routine sterile gloving, but to wear clean, nonsterile gloves at the start of the procedure during optional sterile gloving and to change to sterile gloves if needed (for example, when palpating the vein after skin disinfection).
Commercially available, unpowdered, sterilized latex gloves individually packaged in pairs were used as the sterile gloves. The gloves were not reused, and a different pair of gloves had to be opened whenever sterile gloving was necessary. For clean (nonsterile) gloves, we used unpowdered, nonsterilized latex gloves, which were commercially provided in boxed sets of 100 gloves and routinely used on the wards. The clean gloves had to be drawn directly from the box before collection of blood for culture and were not reused. Crossover allocation between the gloving techniques was done on the 15th day of the month. The method for routine or optional sterile gloving was retaught at the time of crossover in order to minimize the carryover effect. Our study design allowed a patient to have blood drawn by using both gloving techniques, by the same intern if near the crossover time or by different interns. The skin disinfectant consisted of 10% aqueous povidone-iodine, and the rubber septa on the blood culture bottles were disinfected with 70% isopropyl alcohol. Without use of needle change methods (10), blood specimens were inoculated into both aerobic and anaerobic vials of blood culture media (BacT/ALERT FA and FN, bioMérieux, Durham, North Carolina). Blood cultures were incubated at 37°C for 7 days. Organisms and their susceptibilities to antibiotics were identified by using automated methods and standard criteria (MicroScan WalkAway-96, Siemens Healthcare Diagnostics, Deerfield, Illinois). The interns were instructed to record the actual gloving methods for each patient so that their adherence to gloving methods could be investigated.

Classification of Blood Culture Isolates

At our hospital laboratory, blood culture bottles are only accepted in paired sets, consisting of an anaerobic bottle and an aerobic bottle. According to the current blood culture guidelines (11), 2 or 3 sets of blood are routinely drawn when blood culture is needed.
If any organism was isolated from any bottle in a blood culture set, it was considered a positive blood culture. If an organism was isolated from only 1 set of 2 or more blood cultures done from 1 blood collection, it was considered a single positive blood culture. For example, if 2 sets of blood cultures were done from 1 blood collection and Staphylococcus aureus was isolated from 2 sets, that episode was counted as 2 positive blood cultures and no single positive blood culture. If the same organism was isolated from only 1 set of cultures, it was counted as 1 positive blood culture and 1 single positive blood culture. In cases of polymicrobial cultures, if any isolated organism was classified as a likely or possible contaminant, the blood culture was regarded as a single contaminated culture for calculating contamination rates. Three infectious disease specialists who were blinded to the intern assignments independently classified each isolate from single positive blood cultures as likely contaminant, possible contaminant, or true pathogen. If all 3 opinions were different, that of another infectious disease specialist was obtained. Final decisions were made by a majority rule.

Context: False-positive blood cultures are common and lead to additional medical testing, unnecessary antibiotic use, and increased health care costs.
Contribution: In a randomized, controlled trial, use of sterile gloves while drawing blood for culture reduced the contamination rate by almost one half compared with usual practice.
Caution: Only interns drew blood for culture, and they were not blinded to the type of glove used. The study was done in only 1 institution, and blood for culture was obtained only on general medical floors.
Implication: Routine sterile gloving may be a useful strategy to reduce blood culture contamination rates.
- The Editors
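The adjudication step described above amounts to a majority vote over three specialists' calls. A minimal sketch (function and label names are ours; the study's tie-break, consulting a fourth specialist when all three calls differ, is represented here by returning None):

```python
from collections import Counter

def classify_isolate(votes):
    """Majority rule over the specialists' independent calls, as described above.

    votes: list of "likely contaminant" / "possible contaminant" / "true pathogen".
    Returns the majority label, or None on a three-way split (in the study,
    a further specialist's opinion was then obtained).
    """
    label, count = Counter(votes).most_common(1)[0]
    return label if count >= 2 else None

# Example: two of three specialists call the isolate a likely contaminant
print(classify_isolate(["likely contaminant", "likely contaminant", "true pathogen"]))
```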
Likely contaminants were common skin flora, including Bacillus species, coagulase-negative staphylococci, Corynebacterium species, Enterococcus species, Micrococcus species, Propionibacterium species, or viridans streptococci, without isolation of the identical organism with the same antibiotic susceptibility from another potentially infected site, in a patient with incompatible clinical features and no attributable risks (13). True pathogens were defined as enteric gram-negative bacilli, Pseudomonas species, S. pyogenes, S. pneumoniae, Bacteroides species, and Candida species (14), or by obtaining an identical organism with the same antibiotic susceptibility from another potentially infected site when the organism could account for the clinical features of the patient. Possible contaminants were defined as isolates obtained from 1 set of blood cultures that did not meet the criteria for likely contaminants or true pathogens.

Statistical Analysis

Our study was designed to determine whether routine sterile gloving during blood culture collection reduces blood culture contamination rates. The sample size necessary to detect a 2-fold decrease in the contamination rate was calculated. We assumed that the contamination rate in the study hospital would be 1%, resulting in 9400 blood cultures being required to detect a difference of this magnitude (power, 0.8; type I error, 5%). Therefore, 6 months was determined to be the study period on the basis of the usual frequency of blood cultures in the study hospital. The difference in blood culture contamination rates was evaluated according to the original group assignment, regardless of the actual gloving methods, by using generalized mixed models with binary outcome. In each model, the patient and intern were included as random effects because of a possible clustering effect by these factors.
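The stated sample size can be approximately reproduced with the classic normal-approximation formula for comparing two proportions; this is our reconstruction, since the authors do not state which formula they used:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_n(p1, p2, alpha=0.05, power=0.8):
    """Sample size per group for a two-sided test of two proportions
    (normal approximation with pooled variance under the null)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return num / (p1 - p2) ** 2

# A 2-fold decrease from an assumed 1% contamination rate (1% vs 0.5%)
n_per_group = two_proportion_n(0.01, 0.005)
print(round(2 * n_per_group))   # total cultures needed across both periods
```

This yields roughly 4,700 cultures per period, about 9,350 in total, close to the 9400 reported; the small difference presumably reflects rounding or a slightly different formula.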
Gloving method, hospitalization unit, and sequence of gloving methods (routine-to-optional or optional-to-routine) were included as fixed effects. The interaction between gloving method and hospitalization unit or between the sequence and hospitalization unit was not significant. The differences in adherence to the assigned gloving methods and reporting rates for the actual gloving methods used were evaluated by using generalized mixed models with binary outcomes. In these models, the intern was included as a random effect, and gloving method, hospitalization unit, and sequence of gloving methods were included as fixed effects. We used the chi-square test to compare the distribution of blood cultures according to hospitalization unit between routine and optional sterile gloving. The statistical analyses using generalized mixed models were done by using SAS software, version 9.2 (SAS Institute, Cary, North Carolina). Other statistical analyses and randomization were done by using SPSS software, version 17.0 (SPSS, Chicago, Illinois). All tests were 2-tailed. A P value less than 0.05 was considered statistically significant. The institutional review board at Seoul National University Hospital approved the study protocol.

Role of the Funding Source

No external funding supported this study.

Baseline Characteristics

All 64 interns placed in the medical wards during the study period participated. A total of 10 520 blood cultures from 1854 patients were analyzed, comprising 5265 blood cultures from the routine sterile gloving period and 5255 blood cultures from the optional sterile gloving period (Figure). The number of blood cultures from each hospital unit was 7027 (66.7%) from general wards, 2446 (23.2%) from hematology wards, and 1047 (9.9%) from the intensive care unit. No significant difference between the routine and optional sterile gloving groups was found in the distribution of blood cultures according to unit (P = 0.55) (Table 1).
The mean number of blood cultures done by an individual intern was 164 (SD, 66; interquartile range, 116 to 193).

Adherence to Gloving Methods

The interns reported the actual gloving methods for 8082 (76.8%) of 10 520 blood cultures. The reporting rate for the actual gloving methods used was 76.3% (5363 of 7027 cultures) in general wards, 80.5% (1968 of 2446 cultures) in hematology wards, and 71.7% (751 of 1047 cultures) in the intensive care unit (P = 0.133). The reporting rate was 76.8% (4045 of 5265 cultures) in routine sterile gloving and 76.8% (4037 of 5255 cultures) in optional sterile gloving (P = 0.54). No statistically significant difference was found in contamination rates between blood cultures obtained with or without known gloving methods. During routine sterile gloving, sterile gloves were worn for 95.3% of blood draws (1818 of 1907) in the optional-to-routine sterile gloving group (P = 0.68). In optional sterile gloving, sterile gloves were worn for 2.8% of blood draws (55 of 1977) in the routine-to-optional sterile gloving group and 11.7% (241 of 2060) in the optional-to-routine sterile gloving group (P < 0.001).

DISCUSSION

In this cluster randomized crossover trial, routine sterile gloving just before venipuncture reduced blood culture contamination rates by approximately 50%. To the best of our knowledge, our study is the first to evaluate the influence of sterile gloving on blood culture contamination rates. Although sterile gloving is a basic aspect of aseptic technique, most previous studies did not consider the gloving method when they evaluated blood culture contamination rates (5, 6, 15, 16). To minimize confounding caused by a difference in phlebotomy skills and the consequential contamination risk for individual interns, we used a crossover design and included a random effect of interns in the statistical model. Previous studies found that trained phlebotomy teams decrease blood culture contamination rates compared with resident physicians or nurses (9, 17).
These findings suggest that personal phlebotomy skills influence blood culture contamination rates. Our data also showed that contamination rates varied across individual interns. Although blood cultures were done by interns rather than dedicated phlebotomists in this study, the baseline contamination rate was relatively low, even when possible contaminants were included (Table 2): whereas the baseline contamination rates reported by previous randomized, controlled trials were 3% to 9% (5, 8, 15, 16), our contamination rate was roughly 1% during optional sterile gloving. The exclusion of the emergency department and pediatric ward may partly explain our low contamination rates, because contamination rates tend to be higher in these areas than elsewhere (6, 8, 15). In addition, comprehensive education on the standard protocol for specimen collection might have contributed to the low contamination rates (18). Furthermore, awareness of the research might have increased intern adherence to the standard protocol. Our data imply that adherence to current guidelines can reduce blood culture contamination rates to approximately 1%, as shown in our control period. The lower blood culture contamination rates associated with routine sterile gloving may indicate the possibility of contamination of the nonsterile gloves worn by the interns (19). An outbreak of contaminated blood cultures caused by nonsterile gloves contaminated with Bacillus species has been reported (20). However, in our study, the contaminants during optional sterile gloving were diverse and were mainly skin flora, which suggests that an outbreak due to collective contamination of nonsterile gloves was less likely.
The difference in blood culture contamination rates between the routine and optional sterile gloving groups was greatest in the intensive care unit. The relatively higher contamination rates there during optional sterile gloving may be explained by phlebotomy difficulties due to the poor vascular condition of patients with chronic or severe illness, as well as by the less frequent use of sterile gloving, as self-reported by the interns; a heavy workload in the busy intensive care unit might make optional sterile gloving by interns less common. Some previous studies also reported higher contamination rates in intensive care units than in general wards (21, 22), although data comparing the contamination rates of intensive care units and other hospitalization units are limited. These findings collectively suggest that sterile gloving may be preferable to clean (nonsterile) gloves for blood culture collection, especially in an intensive care unit. Sterile gloving was self-reported for approximately 7% of blood draws during the optional gloving period. Although this rate seems low, it might reflect actual practice under the current guidelines. The greater use of sterile gloving during the optional period when that period came first may be explained as follows. First, the initial pressure of the research might have made the interns stricter in their use of sterile gloving during the optional period. In addition, some interns who were exhausted by sterile gloving during the routine sterile gloving period might have been reluctant to use sterile gloving during the optional period. In this study, the approximately 50% reduction in blood culture contamination rates with routine sterile gloving suggests that it can prevent 1 contaminated blood culture among 100 patients who need 2 sets of blood cultures. The effect of sterile gloving may be larger in a hospital with higher rates of blood culture contamination.
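The "1 contaminated culture prevented per 100 patients" estimate above follows from simple arithmetic; a sketch using the approximate figures quoted in the text (~1% baseline contamination per culture, ~50% relative reduction), not exact study data:

```python
# Absolute-risk arithmetic behind the number-needed-to-glove estimate.
# Rates are the approximate values from the text, not exact study data.
baseline_rate = 0.01          # contamination per culture, optional gloving
relative_reduction = 0.50     # routine sterile gloving roughly halves it
cultures_per_patient = 2      # a patient needing 2 sets of blood cultures

arr_per_culture = baseline_rate * relative_reduction
prevented_per_100_patients = arr_per_culture * cultures_per_patient * 100
print(prevented_per_100_patients)  # -> 1.0
```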
Our study has limitations. First, the interns could not be blinded to their gloving methods. Although the interns were not told which gloving method the investigators preferred, this unavoidable lack of blinding may have introduced bias. Second, we did not consider the site of peripheral venipuncture or the skin condition of the patients, such as dermatitis, which could influence contamination rates; however, we think the effect of these factors was minimized by the stratified randomization and crossover study design. Finally, we did not include blood cultures from the emergency department, surgical wards, and pediatric wards, which may preclude generalization of our study results. In conclusion, routine sterile gloving before venipuncture statistically significantly reduced blood culture contamination rates. The use of sterile gloving when collecting blood may reduce contamination rates in blood cultures. From Seoul National University College of Medicine, Seoul, Republic of Korea.
Glucose effect on the degradation kinetics of methyl parathion by the filamentous fungus Aspergillus niger AN400

This study evaluated the effect of glucose on the removal of methyl parathion by Aspergillus niger AN400. The study was conducted in two stages: toxicity tests on plates and assays in flasks under agitation at 200 rpm. The methyl parathion concentrations in the toxicity test ranged from 0.075 to 60 mg/L. The second stage consisted of batch reactor assays: six control reactors with methyl parathion solution; six reactors with fungi and methyl parathion; and six reactors containing fungi, methyl parathion, and glucose. The reaction times studied ranged from 1 to 27 days. Methyl parathion concentrations of up to 60 mg/L were not toxic to Aspergillus niger AN400. The first-order kinetic model represented the methyl parathion conversion rate well. The first-order kinetic constant was 0.063 ± 0.005 h⁻¹ for flasks without added glucose, while a value of 0.162 ± 0.014 h⁻¹ was obtained when glucose was added. Fungi have been employed extensively to remove toxic and recalcitrant compounds. Garcia et al. (2000) used the species Aspergillus niger, Aspergillus terreus and Geotrichum candidum for the removal of phenolic compounds; Volke-Sepulveda et al. (2003) studied hexadecane biodegradation by Aspergillus niger; and Bruce et al. (1995) investigated the degradation of pentachlorophenol by the fungal species Phanerochaete chrysosporium, Trametes versicolor and Inonotus dryophilus.
Glucose addition is important to improve the efficiency of bioremediation of persistent compounds such as dyes (YANG et al., 2008; RODRIGUES et al., 2010), phenols (RODRIGUES et al., 2007; SILVA et al., 2007) and pesticides (SAMPAIO, 2005; YANG et al., 2008). Singh (2006) reports that glucose addition produces substances of high reactivity, which react more easily with the pollutant. This research focused on evaluating MP removal by Aspergillus niger in the presence and absence of glucose, and on estimating the biological degradation kinetics. Glucose was chosen because it is a primary substrate and the main carbon source for this fungus.

Cultivation and production of the fungus

The species Aspergillus niger AN400 was grown on Petri dishes at a tempera-

Introduction

The environmental contamination resulting from the worldwide indiscriminate, abusive, and long-term use of pesticides is a cause of great concern to public authorities and health providers, for it seriously impacts the sustainability of natural resources and human health. One consequence of the widespread use of pesticides in agriculture is the contamination of water bodies. The use of agrochemicals close to flooded areas has led to the intoxication of many fish species (ESPINDOLA et al., 2000). It represents a serious pollution problem that causes environmental imbalance and a high incidence of fish poisoning, which is harmful to aquatic and human life. Several factors are directly related to the persistence and toxicity of these compounds in the environment, including soil and water mobility, half-life in soil and water, frequency of application, climatic conditions, and irrigation (SUDO et al., 2002).
Although its activity in the environment is short-lived and not very dispersive, methyl parathion (MP) can be highly toxic for humans. Toxicity by this organophosphate results from the inhibition of the enzyme acetylcholinesterase, which causes acetylcholine to accumulate in the body, affecting the central nervous system and sometimes leading to fatal respiratory failure (HERNANDEZ et al., 1998). Despite these hazards, this pesticide is widely used in agriculture.

Culture media

Two culture media were prepared.

Assays in batch reactors

Eighteen Erlenmeyer flasks (250 mL) were used as reactors. They were sealed and divided into three sets. The first set consisted of six control reactors containing 100 mL of culture medium 1 with different MP concentrations (C); the second set consisted of six reactors containing 100 mL of culture medium 1 with six different MP concentrations and 2 × 10⁶ A. niger spores (PF); and the third set contained six reactors with 100 mL of culture medium 2, six different MP concentrations, and 2 × 10⁶ A. niger spores (PFG). Table 1 presents the initial MP concentrations in each reactor. All reactors were covered with black plastic bags and subjected to 200 rpm shaking in the shaker used in the first stage. The temperature was kept at 30 °C throughout the experiment. The parameters analyzed were pH, volatile suspended solids (VSS) and MP concentration. Analyses were performed according to APHA (1995).

Kinetic evaluation of MP degradation

The effect of glucose on the pesticide degradation rate was evaluated through kinetic studies, using temporal profiles of MP concentration for each condition under study. The initial rate (Ro) was estimated at time zero by the mass balance equation for a batch reactor (Equation 1):

Ro = −dC_MP/dt    (Equation 1)

where C_MP is the MP concentration and t is the time.
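The initial rate just defined can be estimated numerically from a measured concentration profile; a hedged sketch using a simple forward finite difference at t = 0, on a made-up profile:

```python
# Estimating the initial rate Ro = -dC/dt at t = 0 from a temporal MP
# concentration profile (Equation 1), via a forward finite difference.
# The profile below is illustrative, not the study's data.
t_h = [0.0, 2.0, 4.0, 8.0]          # time, hours
c_mg_L = [10.0, 8.5, 7.2, 5.2]      # MP concentration, mg/L

ro = -(c_mg_L[1] - c_mg_L[0]) / (t_h[1] - t_h[0])
print(f"Ro = {ro:.2f} mg/(L.h)")    # -> Ro = 0.75 mg/(L.h)
```

In practice a smoother estimate (e.g. the slope of a polynomial fitted near t = 0) is preferable when the data are noisy.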
Analytical methods

MP was quantified using a Shimadzu liquid chromatograph (LC-10 AD) equipped with a UV-visible diode array detector (SPD-10AVP), a column oven (CTO-10AS), and a low-pressure pump system (SL-10 AVP) operating with up to four solvents. The insecticide was separated on a Supelco C18 column (25 cm × 4.6 mm I.D.; 5 µm particles) under the following chromatographic conditions: isocratic elution with an acetonitrile:water mobile phase (80% acetonitrile, 1 mL/min), a run time of five minutes, detection at 270 nm, and a 20 µL injection volume. The pH was determined using Universal Indicator pH 0-14 paper (Merck), and the VSS were quantified according to the Standard Methods for the Examination of Water and Wastewater (APHA, 1995). The samples for analysis, collected in a sterile atmosphere provided by a Bunsen burner, were poured into sterilized Eppendorf flasks. The pH was determined at the moment of sampling, and the sample was refrigerated at 4 °C for subsequent determination of the MP concentration. The VSS concentration was determined at the end of the experiment in all samples from the PF and PFG reactors (INGELSE et al., 2001).

Assays in batch reactors

Figure 1 shows the concentration of MP in the batch reactors (PF) over time. Clearly, Aspergillus niger was able to remove MP from the liquid phase, since all reactors inoculated with the fungus showed a drop in MP concentration during the experiment, while the control reactors maintained the same MP concentration throughout. The inhibitory effect of MP on the removal efficiency was also clear: the reactor with an initial MP concentration of 0.2 mg/L displayed a removal efficiency of 51% after 27 days, while PF6, the reactor with an initial MP concentration of 19.1 mg/L, removed only 2%.
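The removal efficiencies quoted above are simple percent reductions from the initial concentration; a minimal sketch (the final concentration below is illustrative, chosen to reproduce the ~51% figure):

```python
# Removal efficiency (%) = (C0 - Ct) / C0 * 100
def removal_efficiency(c0_mg_L: float, ct_mg_L: float) -> float:
    """Percent of methyl parathion removed between time 0 and time t."""
    return (c0_mg_L - ct_mg_L) / c0_mg_L * 100.0

# e.g. a reactor starting at 0.2 mg/L and ending at 0.098 mg/L
print(round(removal_efficiency(0.2, 0.098), 1))  # -> 51.0
```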
Figure 2 shows the beneficial effect of glucose addition on the MP removal rate. The highest removal efficiency was 82%, achieved by the reactor with the lowest initial concentration (0.62 mg/L). Inhibition by the insecticide was also evident, since removal efficiency increased at lower initial MP concentrations; in other words, the highest initial concentration tested (24.89 mg/L) resulted in a removal efficiency of only 43%. According to Griffin (1994), the presence of glucose reduces the lag phase, hastening the exponential growth phase. The enzymatic action of the fungus may have been responsible for the degradation of MP. This fungus possesses several enzymatic systems, such as glucose oxidase, catalase and lactanase (WITTEVEEN, 1993), as well as cytochrome P450 monooxygenase and ligninolytic enzymes (PRENAFETA BOLDÚ, 2002). Unlike cytochrome P450, chloroperoxidase was not capable of cleaving the oxon structures. The influence of glucose on MP degradation can also be evaluated through a kinetic study. The initial rates (Ro) were obtained from the temporal profiles of MP concentration for several initial MP concentrations (Equation 1). The Ro values are presented in Table 2; data for control reactors are not shown. The first-order kinetic model represented the MP degradation rate data well, as shown in Equation 2:

R = k₁ C_MP    (Equation 2)

where R is the overall conversion rate of MP, C_MP is the MP concentration, and k₁ is the first-order kinetic constant. The kinetic constant in the experiments without glucose was 0.063 ± 0.005 h⁻¹, and with glucose, 0.162 ± 0.014 h⁻¹. Therefore, the addition of glucose clearly increased the MP conversion rate.
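The first-order fit can be sketched with SciPy's curve_fit, whose default solver for unbounded problems is Levenberg-Marquardt, the nonlinear-regression method cited in this paper. The concentration profile below is synthetic, generated from the reported k₁ for the glucose flasks; it is not the study's data.

```python
# First-order decay C(t) = C0 * exp(-k1 * t), fitted by nonlinear
# least squares (Levenberg-Marquardt for unbounded problems).
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, c0, k1):
    return c0 * np.exp(-k1 * t)

t = np.linspace(0, 24, 9)              # hours
c_obs = first_order(t, 10.0, 0.162)    # synthetic profile, k1 = 0.162 / h

popt, _ = curve_fit(first_order, t, c_obs, p0=[8.0, 0.1])
c0_fit, k1_fit = popt
print(f"k1 = {k1_fit:.3f} per hour")   # recovers ~0.162
```

On real, noisy profiles the fitted k₁ would carry an uncertainty, as in the ± values reported above.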
The cellular production in the PFG reactors was around 80% (Figure 3), except for PFG6, which contained the highest concentration of MP. The PF reactors showed a decrease in biomass production as the MP concentration increased. Thus, the addition of glucose led to a different behavior from that displayed by the reactors without glucose, indicating that, in the range of MP concentrations from 0.62 to 14.52 mg/L, cellular growth was practically the same. It is assumed that glucose can be indispensable both for the removal of MP and for cellular growth. However, statistical tests are necessary to confirm the importance of glucose addition for MP removal.

Conclusions

MP concentrations of up to 60 mg/L were not toxic to Aspergillus niger AN400. The presence of glucose at a concentration of 0.5 mg/L helped the removal of the pollutant. The highest MP removal achieved was 82%, in the PFG1 reactor, which was loaded with the lowest initial MP concentration of the group (0.62 mg/L). Therefore, the presence of glucose was indispensable for MP removal. The first-order kinetic model represented the rate of MP degradation well, particularly in the reactors containing glucose. The kinetic constant in the experiments without added glucose was 0.063 ± 0.005 h⁻¹ and in those with glucose, 0.162 ± 0.014 h⁻¹, indicating that the addition of glucose hastened the conversion of MP. Cell growth in the PFG reactors was not affected by increases in MP concentration up to 14.52 mg/L, but it declined at an MP concentration of 24.89 mg/L. In the PF reactors, cell growth decreased as the MP concentration increased. A kinetic model was adjusted to the Ro values as a function of the initial MP concentrations in the reactors. The Ro values were estimated, and the kinetic model was fitted using the Levenberg-Marquardt nonlinear regression method in Microcal Origin 5.0® (MARQUARDT, 1963).
Figure 1 - Variation of MP concentration as a function of the reaction time in the PF reactors.
Figure 2 - Variation of MP concentration as a function of the reaction time in the PFG reactors.
Figure 3 - VSS profile in the PF and PFG reactors.
Table 1 - Initial concentrations of MP used in the batch reactor assays for the control, PF and PFG sets. C: control reactors; PF: reactors with fungi and methyl parathion; PFG: reactors containing fungi, glucose, and methyl parathion.
Table 2 - Initial reaction rate (Ro) of MP degradation for reactors with (PFG) and without (PF) glucose. * The reaction rate could not be determined from data obtained under this condition.
In vitro culture with gemcitabine augments death receptor and NKG2D ligand expression on tumour cells

Much effort has been made to understand the relationship between chemotherapeutic treatment of cancer and the immune system. Whereas much of that focus has been on the direct effect of chemotherapy drugs on immune cells and on the release of antigens and danger signals by malignant cells killed by chemotherapy, the effect of chemotherapy on cells surviving treatment has often been overlooked. In the present study, the tumour cell lines A549 (lung), HCT116 (colon) and MCF-7 (breast) were treated with various concentrations of the chemotherapeutic drugs cyclophosphamide, gemcitabine (GEM) and oxaliplatin (OXP) for 24 hours in vitro. In line with other reports, GEM and OXP upregulated expression of the death receptor CD95 (fas) on live cells even at sub-cytotoxic concentrations. Further investigation revealed that the increase in CD95 in response to GEM sensitised the cells to fas ligand treatment, was associated with increased phosphorylation of stress-activated protein kinase/c-Jun N-terminal kinase, and that other death receptors and activatory immune receptors were co-ordinately upregulated with CD95 in certain cell lines. The upregulation of death receptors and NKG2D ligands together on cells after chemotherapy suggests that although the cells have survived preliminary treatment with chemotherapy they may now be more susceptible to immune cell-mediated challenge. This re-enforces the idea that chemotherapy-immunotherapy combinations may be useful clinically and has implications for the make-up and scheduling of such treatments. One way that chemotherapies can sensitise tumour cells to immune-mediated apoptosis is by modifying the interaction between CD95 (FAS/APO-1) and its cognate ligand, FasL (CD178, CD95L).
Ligation of CD95 with FasL is an important factor in immune-mediated clearance of cancer, as this underpins the apoptosis of diseased cells by cytotoxic effector cells such as natural killer cells (NKs) and αβ or γδ T-cells. Activation of these immune effectors causes upregulation of FasL on the plasma membrane, increasing their cytolytic potential, and when FasL on an effector cell ligates CD95 on the surface of a target cell, such as a tumour cell, it instigates formation of a death-induced signalling complex and initiation of the caspase proteolytic cascade. This ultimately leads to the apoptosis of the target cell. As the CD95 death receptor plays such a critical role in apoptosis, and especially in immune-mediated apoptosis, its expression is significant in the development and treatment of cancers. Tumour cells have developed a variety of ways to either avoid CD95-induced death or turn it to their advantage. Tumours can escape immune surveillance through suppression of CD95 signalling or release of soluble decoy receptors that bind to and inactivate FasL on immune cells 1 , and may even induce apoptosis in immune effectors directly by expressing FasL themselves. A common method for tumours to avoid CD95-mediated cell death is to reduce surface expression of the CD95 protein 2 ; this eliminates apoptotic signals by preventing interaction with FasL but can be reversed with agents such as the nucleoside analogue 5-azacytidine if the downregulation of CD95 is caused by DNA methylation 3 . Despite CD95 being implicated in the maintenance and aggressiveness of some cancers 4,5 , it may also be seen as a positive prognostic factor, as many studies have highlighted 6-9 .
Initiating apoptosis in tumour cells through augmentation of CD95 signalling is a treatment avenue that has been explored, mostly unsuccessfully, for a number of years 10 , but a large body of evidence suggests that chemotherapies sensitise tumours to CD95/FasL killing 11,12 : CD95 has been shown to be upregulated on cells after treatment with a number of chemotherapeutics including doxorubicin, cisplatin, mitomycin C, 5-FU, dacarbazine and gemcitabine 13,14 . This phenomenon is observed in numerous cancers and is thought to be dependent on functional wild-type p53, either increasing transcription of the FAS gene or translocation of the protein to the plasma membrane. Other molecules associated with immune-sensitivity, such as TRAIL receptors (TRAILRs) and NKG2D ligands, have also reportedly been induced by chemotherapies in various cancer types 15,16 . The work presented here aims to show whether chemotherapies, including the antimetabolite nucleoside analogue gemcitabine (GEM), which is primarily used in pancreatic, non-small cell lung, breast and ovarian cancers and has been used experimentally in colorectal cancers, can increase expression of CD95 on the surface of a panel of tumour cell lines and whether any increase is functional in terms of induced cell death. Moreover, in line with recent reports, additional signs of immune sensitivity are explored in terms of expression of death receptors and immune effector ligands.

Flow Cytometric Analysis. Cells were stained with fluorochrome-conjugated antibodies specific for CD95 (Biolegend, London, UK), ULBP2/5/6 (R & D) and TRAILR1 and 2 (Biolegend). MICA/B was stained using an unconjugated primary antibody and an anti-species secondary antibody (both Biolegend). Cells were washed prior to resuspension in Cellfix (Becton Dickinson (BD), Oxford, UK).
Acquisition of data was performed within 24 hours using an LSRII flow cytometer (BD Biosciences) by gating on live cells and measuring median fluorescence intensity (MFI).

MTT Assay. The methylthiazoletetrazolium (MTT) assay was used to measure cell number. Briefly, 0.4 mg/ml MTT (Sigma) was added to cell cultures and plates were incubated for 60 minutes. After this time, the medium was aspirated off, 200 μl DMSO was added to each well, and plates were agitated gently before measuring optical density at 540 nm using a microplate reader (Dynex-MRX II, Dynex Technologies Ltd., West Sussex, UK).

Illumina microarrays. RNA was isolated from HCT116 cells using the Qiagen (Manchester, UK) mini-kit protocol following the manufacturer's instructions. Microarrays were performed by Dr Jayne Dennis at the St. George's, University of London Biomics Centre. Biotinylated cRNA was generated from 100 ng total RNA using the Illumina TotalPrep RNA Amplification Kit (Applied Biosystems, Warrington, UK) according to the manufacturer's instructions. Equal amounts (750 ng) of cRNA were hybridised to Illumina human HT12-v3 arrays for 18 hours and subsequently processed according to the manufacturer's instructions before scanning on an Illumina BeadArray Reader. The image data were processed using default values in GenomeStudio v2009.1, with imputation of missing data, before loading onto GeneSpring v9.0 for data normalisation and filtering.

Cignal Reporter Assay. The Cignal Finder™ RTK 10-Pathway Reporter Array (Qiagen) was used to assess activation of various signalling pathways in HCT116 cells. The manufacturer's suggested protocol was followed with some modifications. Briefly, 50 μl of Opti-MEM® medium was added to each well of the array plate to resuspend the signalling-pathway-related transcription-factor-responsive reporter and control constructs. Then, 0.5 μl Lipofectamine® LTX™ in 50 μl Opti-MEM® medium was added to the plate before incubating for 20 minutes at room temperature.
HCT116 tumour cell suspension was then added at 3.5 × 10⁴ cells/ml. The plate was incubated overnight before culturing for a further 24 hours with or without the addition of GEM. The transfected cells were cultured with GEM for zero (untreated), one, four or 24 hours. Pathway-specific transcription factor activity in response to GEM was determined using the Dual-Luciferase® Reporter Assay System (Promega, Southampton, UK) following the manufacturer's instructions. Luminescent activity from each sample was quantified with a Promega GloMax® Multi+ Detection Reader.

Chemotherapy induces expression of CD95 in tumour cell lines. Our previous studies showed an increase in expression of MHC class I on selected tumour cell lines in response to relatively low concentrations of GEM. Also observed were alterations in other components of the antigen processing machinery 17 , suggesting that a coordinated alteration of immunophenotype occurs in GEM-treated cells. Here we sought to confirm previous data indicating that DNA-damaging chemotherapies, including GEM, increase CD95 on tumour cells 18 . This was tested with the HCT116, A549 and MCF-7 cell lines in vitro. Flow cytometry was used to measure CD95 levels on the surface of live tumour cells after 24-hour culture with drugs at previously reported equi-active cytotoxic concentrations: CPM, 100 μM for all cell lines; OXP, 1 μM for A549 and HCT116 cells and 0.6 μM for MCF-7 cells; and GEM, 1 μM for A549 and 0.6 μM for HCT116 cells. The increase in CD95 was most prominent in the HCT116 cell line, where levels were upregulated by a mean of 518% compared to untreated controls. Culturing cells with CPM resulted in no change from the basal level of CD95 at IC25. For all cell lines, CD95 was upregulated in a dose-dependent manner (Fig.
1c) in response to GEM or OXP; however, increases in CD95 were achieved at lower concentrations of GEM in A549 and MCF-7 cells, both cell lines approximately doubling (100% increase) CD95 on the plasma membrane in the presence of 10 nM GEM. HCT116 and A549 cells had a higher maximum relative increase in CD95 than MCF-7 cells: around 7-fold in the former cell lines versus only 4-fold in MCF-7 cells. CD95 was also increased after culture with OXP, especially in the HCT116 and A549 cell lines, although all of the cell lines were less sensitive to CD95 upregulation by OXP than by GEM, and a statistically significant increase was not observed for the HCT116 cell line. Trypan blue cell counts showed that increased CD95 was associated with a reduction in the number of viable cells, linking the upregulation of CD95 to the growth inhibitory effects of the drugs (Fig. 1c).

GEM-mediated CD95 upregulation is reduced in the presence of a JNK inhibitor. The intracellular signalling pathways underlying GEM-mediated upregulation of CD95 were investigated by assessing the phosphorylation status of signalling proteins in HCT116 cells cultured with GEM. Figure 2a shows that culturing tumour cells with GEM increased signalling through pathways involving the kinases ERK and JNK, as shown by increases in the activity of the transcription factors Elk-1/SRF and AP-1, respectively. The maximum increase in signalling for most pathways was achieved at 24 hours, with only the JNK pathway showing any activation at the earliest time-point of one hour. To determine whether signalling through these pathways played a role in CD95 upregulation, cell lines were cultured with GEM and either an inhibitor of the ERK pathway, U0126, or of the JNK pathway, SP600125. Figure 2b shows that blocking signalling through the JNK pathway significantly inhibited GEM-mediated CD95 upregulation.
HCT116 cells were cultured +/− GEM and +/− U0126 or SP600125, and any change in the level of CD95 from controls was assessed by flow cytometry. Data points shown here represent the change from controls (untreated or in the presence of U0126 or SP600125 alone) to GEM-treated (+/− U0126 or SP600125) and are expressed relative to the change in CD95 from untreated to GEM-treated cells. CD95 upregulation was reduced by more than a third by blocking JNK signalling (*p < 0.05). Inhibiting the ERK pathway did not significantly alter GEM-mediated upregulation of CD95. Cells were cultured with soluble FasL and a cross-linking polyhistidine antibody, and the number of viable cells was then measured using the MTT assay. Addition of FasL to untreated cells resulted in a very small decrease in cell number, but where GEM caused increases in CD95, combining with FasL greatly reduced the number of viable cells. This decrease in cell number was likely mediated through the CD95 apoptotic pathway, owing to increased interaction between CD95 on tumour cells and FasL in the medium. In cultures with 5 nM GEM there was a relatively modest 57.7% increase in CD95 compared to controls, but the number of viable cells was markedly reduced by the addition of FasL to the culture (Fig. 3a). At this concentration of GEM, the number of viable cells was 94.1% of controls with no FasL present, but this was reduced to only 59.1% when FasL was included in the culture. At 50 nM GEM, where expression of CD95 was greatly increased, the difference in viable cell number between cultures in the absence or presence of FasL was even greater, changing from 79.0% to 32.1% of controls, respectively. Figure 3b shows that blocking the CD95-FasL interaction negated the decrease in cell number caused by combining GEM with FasL. A concentration of 10 nM GEM was used in all tests, including the control, which in this instance means without FasL.
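The inhibitor normalisation described above can be sketched as a simple ratio of MFI changes; all MFI values below are hypothetical, chosen only to illustrate the "reduced by more than a third" result:

```python
# GEM-induced change in CD95 MFI in the presence of an inhibitor,
# expressed relative to the GEM-induced change without inhibitor.
# All MFI values are hypothetical.
def relative_change(mfi_ctrl, mfi_gem, mfi_inh_ctrl, mfi_inh_gem):
    """Ratio of (GEM-induced rise with inhibitor) to (rise without)."""
    return (mfi_inh_gem - mfi_inh_ctrl) / (mfi_gem - mfi_ctrl)

# e.g. JNK inhibition cutting the GEM-induced rise by more than a third
print(round(relative_change(100, 400, 95, 280), 2))  # -> 0.62
```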
The reduction in cell number measured with GEM alone was negligible, but when FasL was added to the culture a large decrease was observed, down to 40% of the GEM-alone control. However, addition of a CD95 blocking antibody restored cell number to a level similar to the GEM-alone control. This suggests that CD95-mediated cell death, through ligation of FasL with its receptor, is responsible for the reduced cell number observed in GEM-FasL combinations and that GEM-induced CD95 is functional as a death receptor.

Other markers of immunogenicity. DNA-damaging chemotherapies are known to induce stress-related innate immune receptors other than CD95. Therefore, we next assessed whether additional molecules involved in the sensitivity of tumour cells to immune cell killing were altered by culture with GEM. Firstly, microarray analysis of GEM-treated HCT116 cells (Table 1) suggested that increases in surface CD95 protein were associated with increased transcription of the FAS gene: FAS mRNA increased 2.3-fold in response to GEM. Additionally, microarray showed increased mRNA for the death receptor genes TRAILR1 and TRAILR2, and for immune cell activatory NKG2D ligands such as MICB and ULBP2, upon culture with GEM. Expression of TRAILR1 (TNFRSF10A) mRNA was increased 1.4-fold and TRAILR2 (TNFRSF10B) 1.8-fold. Of the TRAIL decoy receptors, TNFRSF10C was unchanged and TNFRSF10D increased 1.5-fold, though these values were only just above the detection limit of the assay. Additionally, transcription of the key NKG2D ligands MICB (1.6-fold) and ULBP2 (2.0-fold) was increased by culturing cells with GEM, as measured by microarray. Furthermore, Fig. 4a shows that TRAILR2 protein was also increased at the surface of cells in response to GEM treatment. Expression of TRAILR1 was undetectable above isotype control in all cell lines and conditions tested. In contrast to its effects on CD95, OXP had no effect on the expression of the TRAIL death receptors TRAILR1 or TRAILR2.
The observed increase in MICB mRNA expression in response to GEM was associated with increased MICA/B protein on HCT116 cells, though this did not reach significance. MICA/B expression was not upregulated on A549 and MCF-7 cells (Fig. 4b). Adding to the complexity of stress-molecule-related cell line responses to GEM, ULBP2/5/6 was strongly increased on the surface of HCT116 and A549 cells but not MCF-7 cells (Fig. 4c). The increased levels of death receptors and NKG2D ligands may render the tumour cells more sensitive to lysis by αβ T-cells, γδ T-cells and NK cells, by increasing the chance that an interaction between an immune effector cell and a tumour cell will result in the death of the tumour cell.

Discussion

In concordance with other reports, culture with GEM led to increased levels of CD95 on the surface of tumour cells. This has been observed previously in vitro 20 and in vivo using aerosolised GEM 21 . However, the upregulation of CD95 in the present study was more pronounced and achieved at lower concentrations than in earlier in vitro reports. This is also the first time that such a response has been shown in the HCT116, A549 and MCF-7 cell lines. The increases in CD95 were functional, as culture of GEM-treated cells with soluble CD95L led to a greater reduction in cell number than GEM alone. GEM-mediated increases in CD95 were associated with alterations in the phosphorylation status of the stress-induced kinase JNK. Apoptosis of colorectal cancer cells in response to direct culture with drugs such as GEM and OXP is reported to result in an increase in CD95 on the surface of the cells 20,22 . GEM and OXP also increased expression of CD95 on the colorectal, breast and lung tumour cell lines tested in the present study; however, this was not necessarily associated with apoptosis, as these changes in CD95 occurred at concentrations where cell number was unaffected.
It has been known for some time that DNA-damaging cancer drugs have the ability to increase innate immune receptor expression on tumour cells, so the upregulation of CD95 in these cell lines was no surprise given that they express functional p53, which is thought necessary for activating the CD95 gene upon chemotherapy-induced DNA damage 23 . The increases in surface CD95 protein levels observed in the present study were dose dependent but, importantly, even small increases in CD95 in response to GEM seemed to enable efficient killing of tumour cells in the presence of FasL. Previous studies have reported that expression of CD95 can be decreased in lung carcinoma compared to non-neoplastic tissue 24 . Therefore, whether the increases observed here are a correction to "normal" or an increase above basal levels is unknown, but the margin of CD95 upregulation, and the associated cell death with relatively low concentrations of FasL, suggests the latter. In line with previous reports 25 , after augmenting expression of CD95 with chemotherapy, agonising the CD95 receptor with FasL reduced cell number. This suggested that the increase in CD95 protein observed at the surface of tumour cells was functionally able to transmit apoptotic signals, and that this occurred even with the modest CD95 augmentation triggered by low concentrations of GEM. Some reports have suggested that the combination of GEM and FasL actually induces a necroptotic form of cell death rather than classical apoptosis 26 , but this was not tested here. The sensitisation of tumour cells to FasL-mediated cytotoxicity demonstrated in the present study may partly explain the cooperation between GEM treatment and immunotherapy, especially when GEM treatment, which raises CD95 on tumour cells, is combined with activated immune effector cells (which should express FasL), such as LAK cells 27,28 . It may be that the clinical efficacy of GEM and other drugs could be greatly enhanced by supplementing with a FasL signal.
This could be achieved by inducing FasL expression on immune cells using an activating immune therapy, or by using fusion proteins of FasL, such as CD40-FasL or CTLA-4-FasL. These fusion proteins have had some success in killing malignant cell lines alone in vitro 29 , but this was dependent on the tumour cells constitutively expressing high levels of CD95. The work presented here suggests that inducing higher levels of CD95 through culture with GEM may endow a whole new raft of tumour cells with the capacity for cell death mediated through the CD95/FasL axis. The same may be true for apoptosis induced through TRAIL. Previous studies have implicated JNK phosphorylation as a consequence of signalling through the CD95 receptor. Here, it is shown that JNK phosphorylation is also important in the upregulation of CD95. It may be that treatment with GEM activates the JNK pathway (which is often activated in response to cellular stress), which in turn induces CD95 upregulation. Any subsequent signalling through CD95 could create a positive feedback loop, with signalling through JNK increasing expression of CD95 to amplify the cell death (or proliferative) signal still further. In addition to the effects on CD95, the expression of other protein markers associated with the susceptibility of cells to immune-mediated cytotoxicity was altered in response to chemotherapeutic drugs. TRAILR2 was increased at the surface of tumour cells by GEM treatment. Raising the amount of this death receptor may represent another avenue that can be exploited by the immune system to clear tumour cells. A previous report has shown the sensitisation of HCT116 tumour cells to TRAIL-mediated clearance using chemotherapy 30 .
Although chemotherapeutic or genotoxic stresses have been implicated in the upregulation of all of the molecules identified here as being upregulated by GEM, this investigation may be the first time that a single agent has been shown to increase CD95, TRAILR2, MICA/B and ULBP2/5/6 on tumour cells. A similar study, indicating that treatment with 5-FU or doxorubicin could sensitise colon cancer stem cells to Vγ9Vδ2 T-cell cytotoxicity, did look for these markers at the mRNA level but found that only TRAILR2 was significantly upregulated 31 . Whether molecules other than TRAILR2 were upregulated at the cell surface is unknown. Whereas in the Todaro study 5-FU and doxorubicin increased TRAILR2 but failed to increase CD95, in the present investigation OXP was shown to increase CD95 but not TRAILR2, suggesting that the modulation of immune sensitivity markers is not a general effect of chemotherapeutics per se, but that differing mechanisms of action exist depending on the type of chemotherapy and cell type used. In addition to augmenting death receptor expression, immune-mediated cytotoxicity of tumour cells may be enhanced by increased recognition of tumours through the upregulation of proteins such as the ligands of NKG2D. GEM-treated tumour cells were found to express increased amounts of the NKG2D ligands ULBP2 and MICA/B at their surface, and microarray analysis suggested this was due to increased transcription induced by GEM treatment. ULBPs and MICA/B are NKG2D ligands involved in enhancing killing by cytotoxic lymphocytes such as αβ and γδ T-cells and NK cells, so their upregulation on tumour cells is relevant to the immune response to tumours. NKG2D ligand upregulation is linked to DNA damage and cellular stress, such as that caused by chemotherapy 32,33 . Indeed, similarly to the present study, a recent report states that low-dose GEM can induce MICA/B expression on some pancreatic cancer cell lines 34 . A further observation from the study by Miyashita et al.
was that soluble MICA/B was released by tumour cells in response to treatment with GEM and that this enhances innate immune cell function. It is possible that the increased gene expression of MICB observed in the present study may also lead to the release of soluble MICB from the cell, although this is as yet untested. Higher expression of NKG2D ligands is linked to a better prognosis for cancer patients 35,36 . In contrast to the upregulation of death receptors and NKG2D ligands, cellular stress can induce tumour cells to produce immunosuppressive factors such as IL-10, TGF-β and PD-L1, and these can negatively affect the activation of, and killing by, immune cells. It is the balance of these activatory and inhibitory signals that determines whether the cell is deleted by the immune system. Whether GEM also causes the type of stress that stimulates tumour cells to heighten their immunosuppressive environment has not been fully investigated, but initial experiments have shown that there is no detectable GEM-mediated increase in surface levels of PD-L1 or release of IL-10 in the cell lines tested here, and that mRNA of the aforementioned genes is also not increased (data not shown). The upregulation of CD95 and other markers on the surface of tumour cells may indicate that the cells are "searching" for a signal to die, one that is provided in the present study by the addition of FasL to the GEM cultures. The upregulation of the FAS, TRAILR1/2, MICB and ULBP2 genes together in response to GEM suggests there may be a coordinated biological strategy by which cells containing DNA damage become sensitised towards immune-cell killing. GEM seems to prime tumour cells for cell death through CD95 or TRAIL, but the biological relevance of increasing the expression of these death receptors on tumour cells will depend on an additional signal provided by the presence of FasL- or TRAIL-expressing CTLs or NK cells in the tumour milieu, which can induce apoptosis via cognate ligation.
Taken together with our previous work, which showed that GEM induced immunoproteasomes and altered the peptides displayed on HLA molecules 17 , this suggests that tumour cells treated with GEM are primed for immune-mediated cytotoxicity in a number of ways. These effects may be associated with the DNA damage response triggered by GEM. The present study is not the first to show that chemotherapy, or indeed GEM, can sensitise tumour cells to immune-mediated death, but the upregulation of a number of surface proteins key to cell cytotoxicity by relatively low concentrations of a single chemotherapeutic agent implies a coordinated response to this type of genotoxic stress, one that permits the activation state of the immune system to ultimately determine the fate of a damaged cell. This may be beneficial in tumour or infected cells where normal damage-evaluation and apoptotic pathways are hijacked. The ability of GEM to induce expression of these molecules together may be one of the reasons for the success of this chemotherapeutic, and has particular relevance to the interactions of this drug with immunotherapies.

Data Availability Statement

There are no restrictions on the availability of materials and data. Accession number for microarray data: GSE122985.
Plant Rhizosphere Selection of Plasmodiophorid Lineages from Bulk Soil: The Importance of "Hidden" Diversity

Microbial communities closely associated with the rhizosphere can have strong positive and negative impacts on plant health and growth. We used a group-specific amplicon approach to investigate local scale drivers in the diversity and distribution of plasmodiophorids in rhizosphere/root and bulk soil samples from oilseed rape (OSR) and wheat agri-systems. Plasmodiophorids are plant- and stramenopile-associated protists including well known plant pathogens as well as symptomless endobiotic species. We detected 28 plasmodiophorid lineages (OTUs), many of them novel, and showed that plasmodiophorid communities were highly dissimilar and significantly divergent between wheat and OSR rhizospheres and between rhizosphere and bulk soil samples. Bulk soil communities were not significantly different between OSR and wheat systems. Wheat and OSR rhizospheres selected for different plasmodiophorid lineages. An OTU corresponding to Spongospora nasturtii was positively selected in the OSR rhizosphere, as were two genetically distinct OTUs. Two novel lineages related to Sorosphaerula veronicae were significantly associated with wheat rhizosphere samples, indicating unknown plant-protist relationships. We show that group-targeted eDNA approaches to microbial symbiont-host ecology reveal significant novel diversity and enable inference of differential activity and potential interactions between sequence types, as well as their presence.
INTRODUCTION

Plant roots release considerable amounts of labile exudates and debris into the soil, which results in intense microbial activity in the rhizosphere soil which surrounds roots, and the selection of communities which are structurally and functionally distinct from the bulk soil (Morgan et al., 2005). The rhizosphere microbiome can have major impacts on plant growth and nutrition, which can be both positive and negative, through complex direct and indirect interactions. Although it is widely appreciated that a broad range of microbial groups can inhabit the rhizosphere, most studies have focussed on bacteria and fungi (Mendes et al., 2013; Philippot et al., 2013). For these groups, a range of factors can determine the specific communities which assemble in the rhizosphere, including plant characteristics such as genotype and age, environmental properties including soil type and climate, and, in agricultural soils, management interventions such as crop rotation and fertilization (Bennett et al., 2012). Protists are also components of the rhizosphere microbiome (Mendes et al., 2013), and can also have marked impacts on plant growth through direct and indirect pathways (Bonkowski, 2004). However, they are typically not considered in studies of rhizosphere microbiology, largely because culture-independent techniques to profile complex protist communities remain limited (Adl et al., 2014), with the result that there is little understanding of the factors which shape protist communities in the rhizosphere. One protist group in particular, the plasmodiophorids, rarely receives attention in rhizosphere ecology studies, yet its members are well known as plant pathogens and virus vectors. Plasmodiophorids (Rhizaria; Endomyxa; class Phytomyxea, Order Plasmodiophorida) are parasites and symbionts of angiosperms and stramenopiles (Bass et al., 2009; Neuhauser et al., 2014).
Plasmodiophorid diversity is much greater than the few known parasitic taxa, and their broader role(s) in the rhizosphere are of great interest. Plasmodiophorids form obligate associations with their hosts, which are often green plants, but in some instances they can also infect other parasites, including heterotrophic stramenopiles, e.g., Woronina pythii, which infects Pythium spp. (Dylewski and Miller, 1983; Neuhauser et al., 2014). Plasmodiophorids are the causative agents of economically significant diseases of crops including brassicas, potatoes, and grain crops (e.g., maize, rice, wheat, sorghum). Plasmodiophora brassicae is the most commercially important and best studied plasmodiophorid, causing clubroot disease in cruciferous plants such as oilseed rape (OSR). Clubroot has been shown to result in average crop losses of 10-15% on a global scale (Dixon, 2009, 2014; Hwang et al., 2012). Other plasmodiophorids include Spongospora subterranea, which causes powdery scab of potato and can also vector Potato Mop Top Virus (Beuch et al., 2015; Falloon et al., 2015). Polymyxa graminis can infect most graminaceous crops; it does not cause disease symptoms itself, but can transmit several viruses, such as soil-borne wheat mosaic virus (SBWMV), which is considered one of the most important diseases of winter wheat in the Central and Eastern USA (Kanyuka et al., 2003). Polymyxa betae vectors Beet Necrotic Yellow Vein Virus, which causes sugar beet "rhizomania," resulting in ca. 10% loss of world sugar beet production (Lemaire et al., 1988; Desoignies et al., 2014; Hassanzadeh Davarani et al., 2014; Biancardi and Lewellen, 2016). As obligate biotrophs, plasmodiophorids require specific, living hosts to complete their life cycle and reproduce successfully.
Beyond these "primary" hosts, in which the full life cycle can be completed, some plasmodiophorids are associated with a variety of alternative hosts, in which often only the sporangial part of the life cycle can be completed, resulting in the formation of short-lived zoospores. For example, Spongospora subterranea, whose primary hosts belong to the Solanaceae, can also cause small sporangial infections in hosts within the plant families Poaceae, Brassicaceae, Leguminosae and Geraniaceae (Qu and Christ, 2006). Polymyxa graminis, whose primary hosts include most Poaceae, can, for example, also infect Arabidopsis thaliana as an alternative host. Similarly, Polymyxa betae, which was considered to be a specialist pathogen of Chenopodiaceae (Desoignies et al., 2014), has now been found to also infect graminaceous hosts such as wheat (Smith et al., 2013). The complexity of the plasmodiophorid life cycle (Bulman and Neuhauser, 2017), coupled with their small size (3-6 µm), makes them difficult to study. Specimen-independent molecular probing and sequencing of microbial diversity in environmental samples, referred to as eDNA (environmental DNA), offers an alternative perspective on elusive and cryptic microbes to more classical organism-centric studies (Bass et al., 2015). A recent eDNA study (Neuhauser et al., 2014) investigated plasmodiophorid biodiversity by analysing root- and soil-associated plasmodiophorids in a range of habitats, including a vineyard, flood plain and glacier forefield. Eighty-one new OTUs were discovered from just six locations, significantly adding to the 41 known phytomyxid (combined plasmodiophorid and phagomyxid) lineages. This suggests that many lineages, of unknown biological function, remain uncharacterized.
Given the importance of plasmodiophorids as crop disease agents and viral disease vectors, understanding the recently demonstrated "expanded" diversity and distribution of the group within agricultural systems is important to more fully understand crop health, parasite load, and organismal interactions. Novel insight into the structure of plasmodiophorid communities, and the factors modulating it, can be gained by objectively partitioning the community into core and satellite taxa (van der Gast et al., 2011). By decomposing the relationship between distribution (the number of sample communities that taxa occupy) and mean abundance (across those sample communities), core members of the plasmodiophorid communities can be identified as those that are locally abundant and non-randomly distributed, with the rare satellite taxa defined as those that are typically in low abundance and randomly distributed through sample communities (Hanski, 1982; Magurran and Henderson, 2003). This is particularly relevant given that so little is known about the ecological processes acting on plasmodiophorid communities, such as immigration and extinction, and competition and niche partitioning (Ulrich and Zalewski, 2006). For this study, we designed plasmodiophorid-specific PCR primers to generate 18S rDNA amplicons suitable for targeted amplicon high-throughput sequencing (HTS), to determine the local-scale drivers of plasmodiophorid distribution and to compare their community assembly in rhizosphere/root and bulk soil samples. The use of HTS approaches for functional ecology studies of micro-eukaryotes lags behind its use for diversity- and/or phylogenetically oriented studies. However, using HTS for both types of study confers the advantage of being able to detect a phylogenetically defined set of lineages without the biases associated with sampling for, and accurately identifying, small and cryptically differentiated microbes.
We demonstrate that HTS methods can provide information about the host associations of microbial lineages without prior knowledge of the microbes involved or any assumption of specific host-microbe relationships.

Experimental Design and Sampling

A field trial designed to investigate the influence of OSR cultivation frequency on crop yield (Hilton et al., 2013) was used to investigate the roles of crop species (wheat and OSR), sampling time and OSR rotation frequency in controlling plasmodiophorid community assembly in rhizosphere and bulk soil compartments. The field trial was in East Anglia, UK (52°33′N, 1°2′E), on a sandy clay loam soil with a pH of 6.6 and available P, K, Mg and SO4 of 32.4, 111, 28 and 30.6 mg kg−1, respectively. In the trial, OSR (cv. Winner) and winter wheat (cv. Brompton) were grown together in different rotation frequencies over a 5-year period (Table 1). The trial was designed so that each rotation was available for sampling (in different plots) in the 2007 and 2008 harvest seasons. The field was ploughed and pressed each season before establishment. Drilling occurred at the beginning of September for OSR, mid-September for the first winter wheat, followed by mid-October for subsequent wheat. Local commercial best practice was adhered to for pesticide and fertilizer inputs. For OSR this included an autumn herbicide (diflufenican) and insecticide (cypermethrin), and spring insecticides (lambda-cyhalothrin and cyclohexadione), together with nitrogen and sulfur inputs of 200 and 30 kg ha−1, respectively. For wheat this included an autumn herbicide (diflufenican), spring fungicides (propiconazole, chlorothalonil and cyproconazole) and 100 kg N ha−1. Samples were collected in years 4 and 5 of the trial, from continuous OSR, continuous wheat, 1-in-2 OSR, 1-in-3 OSR, virgin OSR (year 4 only) and wheat after OSR (year 5 only).
In each case samples were collected from year 4 of the trial in June 2007 (pre-harvest), and year 5 of the trial in November 2007 (seedling stage), March 2008 (stem extension) and June 2008 (pre-harvest). Rotations were arranged in a randomized block design, with four replicate plots per rotation, each plot measuring 24 m × 6 m (Table S2). At each sampling time, plants were excavated at 6, 12, and 18 m intervals along the length of the plot, with six plants collected from each plot. Bulk soil was collected at the same intervals using a 30 cm auger, and pooled within each plot. Plant roots were shaken free of loose soil, and all lateral roots were excised from the tap root and cut into approximately 5 mm sections. The roots and closely adhering soil were designated rhizosphere (for this study, "rhizosphere" explicitly includes both root tissue and root-associated soil). Equal amounts of rhizosphere material were combined from the six plants within each plot, mixed, and 0.5 g representative sub-samples collected for molecular analysis. Bulk soil was sieved through a 3 mm sieve, taking care to avoid inclusion of roots, and a 0.5 g sub-sample removed for molecular analysis. DNA was extracted from rhizosphere and bulk soil using a FastDNA SPIN kit for soil (MP Biomedicals LLC, UK) following the manufacturer's guidelines for all steps, except that a Mini-Beadbeater-8 cell disrupter was used for a 3 min period in place of a FastPrep machine (Biospec Products, Inc., USA). 10 µL of the original DNA was diluted with 40 µL of molecular grade water to give a 1:5 diluted stock solution, which was used for PCR.

PCR Amplification and Sequencing of Plasmodiophorid 18S rRNA Genes

A reference pan-eukaryote alignment (Glücksman et al., 2011) was used to design the new plasmodiophorid-specific primer pair 1301f (5′-GATTGAAGCTCTTTCTTGATCACTTC-3′) and 1801r (5′-ACGGAAACCTTGTTACGACTTC-3′), which amplify the V7-V9 region of the 18S rRNA gene (18S rDNA).
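A minimal in-silico illustration of how a primer pair of this kind can be checked against a candidate 18S rDNA sequence is sketched below: the forward primer is sought directly and the reverse primer as its reverse complement. The template here is synthetic; the study's actual specificity checks used Blastn searches, reference alignments and test PCRs.

```python
# Toy in-silico primer check (synthetic template; illustration only).

FWD = "GATTGAAGCTCTTTCTTGATCACTTC"   # forward primer, 5'->3'
REV = "ACGGAAACCTTGTTACGACTTC"       # reverse primer, 5'->3'

def revcomp(seq):
    """Reverse complement of an A/C/G/T sequence."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def predicted_amplicon(template):
    """Return the predicted amplicon (primer to primer, inclusive),
    or None if either primer site is absent or mis-ordered."""
    start = template.find(FWD)
    end = template.find(revcomp(REV))
    if start == -1 or end == -1 or end <= start:
        return None
    return template[start:end + len(REV)]

# A synthetic template carrying both primer sites around a 20-nt insert
toy = "AAAA" + FWD + "N" * 20 + revcomp(REV) + "TTTT"
print(len(predicted_amplicon(toy)))  # 26 + 20 + 22 = 68
```

A real amplicon from this primer pair spans the V7-V9 region (c. 500 bp in the directly sequenced products), so the toy insert above only stands in for that region.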
The specificity of these primers was tested by (1) Blastn searches against the NCBI GenBank nr/nt database, (2) alignment against the set of plasmodiophorid sequences in Neuhauser et al. (2014), and (3) amplifying two DNA samples separately positive for Plasmodiophora brassicae and Polymyxa graminis, plus a subset of six soil samples from the present study. Gel electrophoresis showed that all reactions produced a single band of the expected size. The amplicons for P. brassicae and P. graminis were directly sequenced and yielded single partial 18S rDNA sequences of c. 500 bp that were 99-100% similar to voucher sequences for those species. Products from the soil samples were cloned and sequenced as described in Neuhauser et al. (2014). Blastn and phylogenetic analyses (see below) confirmed that all 48 successfully sequenced clones grouped within the plasmodiophorid clade. To generate amplicons for 454 sequencing, PCR reactions were carried out using 1 µl of DNA extracted from soil and rhizosphere samples (primers at a final reaction concentration of 0.2 µM) in 24 µl MyTaq HS mastermix (Bioline, London, UK). 10-bp MIDs and A and B 454 adaptors were then ligated onto the amplicons, which were cleaned using AMPure XP beads at a ratio of 0.6:1, and samples were pooled in equimolar amounts following quantitation using a Shimadzu MultiNA (Milton Keynes, UK). Sequencing was performed on a Roche 454 GS Junior pyrosequencer (454 Life Sciences/Roche Applied Biosystems, Nutley, NJ, USA) at Micropathology Ltd (Coventry, UK), entirely according to the manufacturer's protocol with no deviations (libL emPCR kit; Roche 454 Sequencing System software manual, v 2.5p1). The sequence data are available via NCBI SRA Study number SRP125323.
Bioinformatic Processing of 454 Sequence Data and Phylogenetic Analyses

QIIME 1.8.0 software (Caporaso et al., 2010) was used to filter the raw sequence files according to a quality score of 25, sequence length between 200 and 1,000 bp, zero primer mismatches, up to six homopolymers, zero ambiguous bases and a maximum of 1.5 barcode errors. The FASTA files were demultiplexed and partitioned based on sample identifiers. The trimmed sequences were then incorporated into the UPARSE pipeline (Edgar, 2013) to remove singletons (OTUs with <2 sequences across all samples), and the amplicon reads were clustered into OTUs at 97% sequence similarity. The L-INS-i algorithm in Mafft (Katoh and Standley, 2013) was used to align these with plasmodiophorid sequences from Neuhauser et al. (2014) that included the V7-V9 18S region amplified by the primers developed for this study. The alignment was then refined by eye. Three OTUs branched robustly within the non-phytomyxid outgroup and were shown to be angiosperm sequences. OTUs that differed from a sequence in the reference database by three or more nucleotide positions in two or more variable regions of the amplicon were considered distinct lineages; those more similar to reference sequences were considered to belong to the reference lineage. OTUs that were distinguished only by nucleotide differences in conserved regions were considered non-distinct and removed, leaving a total of 28 OTUs. These sequences were submitted to GenBank (accession numbers KX263011-KX263038). The refined alignment for Figure 1 was analyzed in RAxML BlackBox (Stamatakis, 2006, 2014) (GTR model + gamma; all parameters estimated from the data); bootstrap values were mapped onto the highest likelihood tree obtained (Stamatakis et al., 2008). Bayesian consensus trees were constructed using MrBayes v 3.2 (Ronquist et al., 2012) in parallel mode (Altekar et al., 2004) on the Cipres Science Gateway (Miller et al., 2010).
Two separate MC3 runs with randomly generated starting trees were carried out for 4 million generations each, with one cold and three heated chains. The evolutionary model applied included a GTR substitution matrix, a four-category autocorrelated gamma correction and the covarion model. All parameters were estimated from the data. Trees were sampled every 100 generations. One million generations were discarded as "burn-in" (trees sampled before the likelihood plots reached a plateau) and a consensus tree was constructed from the remaining sample.

Statistical Analysis

Phylotypes were partitioned into core and satellite taxa groups as previously described (van der Gast et al., 2011). Fisher's alpha diversity within each sample community was calculated using the PAST (Paleontological Statistics, version 3.01) program, available from the University of Oslo (http://folk.uio.no/ohammer/past). Fisher's alpha was chosen as it is a measure of diversity that is relatively unaffected by variation in sample size, and completely independent of it when N individuals > 1,000 (Magurran, 2004). Two-sample t-tests with Bonferroni correction, regression analysis, coefficients of determination (r²), residuals and significance (P) were calculated using XLSTAT (version 2015.1.01, Addinsoft, Paris, France). The Bray-Curtis quantitative index of dissimilarity, analysis of similarities (ANOSIM), and similarity of percentages (SIMPER) were computed using the PAST program (Hammer et al., 2001). The Bray-Curtis index was used as the underpinning community dissimilarity measure for both ANOSIM and SIMPER.

RESULTS

From the 160 samples, 196,196 plasmodiophorid sequences remained after quality screening, with an average sequence read length of 404 bp and an average of 1,226 sequences per sample.
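The two measures underpinning the statistics described above, Fisher's alpha and the Bray-Curtis dissimilarity, can be sketched as follows. This is a minimal Python sketch for illustration only (the study used the PAST program); the example inputs are hypothetical.

```python
import math

def fishers_alpha(S, N):
    """Solve S = alpha * ln(1 + N/alpha) for alpha by bisection,
    where S = number of taxa (OTUs) and N = number of individuals
    (sequences) in the sample; requires S < N."""
    lo, hi = 1e-9, 1e7
    for _ in range(200):
        mid = (lo + hi) / 2.0
        # alpha * ln(1 + N/alpha) increases monotonically with alpha
        if mid * math.log(1.0 + N / mid) > S:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

def bray_curtis(x, y):
    """Bray-Curtis quantitative dissimilarity between two abundance vectors
    (0 = identical composition, 1 = no shared abundance)."""
    shared = sum(min(a, b) for a, b in zip(x, y))
    return 1.0 - 2.0 * shared / (sum(x) + sum(y))

# Illustrative inputs: 28 OTUs in an average-sized sample of 1,226 reads,
# and two hypothetical OTU abundance profiles
print(round(fishers_alpha(S=28, N=1226), 2))
print(round(bray_curtis([10, 5, 0], [2, 5, 8]), 2))
```

The bisection is a simple way to invert Fisher's log-series relation, which has no closed-form solution for alpha.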
These resolved into 28 plasmodiophorid OTUs (plus three discarded plant OTUs), whose phylogenetic position in relation to characterized plasmodiophorids with full 18S rDNA sequence overlap with the OTU amplicons is shown in Figure 1. Gray vertical lines to the right of the branch tip labels indicate the five OTUs that were effectively identical (allowing for low levels of PCR/sequencing error) to sequences in public databases. The other 22 OTUs were all novel, although some are likely to correspond with phylotypes shown in Neuhauser et al. (2014); this cannot be tested directly because the amplicon regions do not overlap, precluding phylogenetic comparison. Three OTUs (2, 11 and 1) were identical to named plasmodiophorids: Polymyxa graminis, P. betae and Spongospora nasturtii, respectively. The OTUs were distributed across the known phylogenetic range of plasmodiophorids. Ten OTUs were closely related to characterized plasmodiophorids (Neuhauser et al., 2014): Sorosphaerula veronicae (associated with the herb Veronica spp.), S. viticola (vitaceous vines), Polymyxa graminis (Poaceae), P. betae (Chenopodiaceae/Amaranthaceae), Woronina pythii (a parasite of the oomycete Pythium spp.), Spongospora subterranea (Solanaceae) and S. nasturtii (watercress). The other OTUs were not clearly related to known plasmodiophorids, and in some cases (OTUs 17, 14, 23, 15, 19 and 4; and OTUs 13, 22, 9 and 6) formed two diverse clades whose only previously known members were from environmental sequencing studies. The first of these clades is particularly interesting as it has a moderately well-supported sister relationship with the clubroot pathogen Plasmodiophora brassicae, with OTUs 4 and 19 the most similar to P. brassicae, while OTUs 14, 15, 17 and 23 are more closely related to sequences from the Baltic Sea (FN690466) and a freshwater lake moss pillar in the Antarctic (AB695525), both high-latitude habitats.
Furthermore, this clade is apparently absent from the environmental survey in Neuhauser et al. (2014); we hereafter refer to it as PlasX. The second clade includes the freshwater-derived EU910610 and two previously detected environmental sequences from the Volga floodplain (Neuhauser et al., 2014); we subsequently refer to it as the "PlasY" clade. Since there were only minor effects of sampling time on community composition (Table S1), plasmodiophorid samples were separated into four distinct habitat types from across the 2007 and 2008 seasons: wheat rhizosphere (n = 28), wheat bulk soil (n = 28), OSR rhizosphere (n = 52), and OSR bulk soil (n = 52). Subsequently, the relative abundance and distributions of the plasmodiophorid OTUs were analyzed within a metacommunity framework for each habitat. Distribution-abundance relationships (DARs) were plotted to ascertain whether each habitat metacommunity exhibited a significant positive DAR, and therefore represented a coherent metacommunity in each instance (Figure 2A). Consistent with this prediction, for each habitat the abundance of individual OTUs was significantly correlated with the number of sample communities that they occupied. Next, DARs were objectively partitioned into core and satellite OTU groups by decomposing the overall distribution using the ratio of variance to mean abundance for each OTU. The variance-to-mean ratio, or index of dispersion, is used to model whether taxa follow a Poisson distribution, falling between the 2.5 and 97.5% confidence limits of the χ² distribution. The indices of dispersion were plotted against sample occupancy for OTUs in each habitat metacommunity (Figure 2B). Of the 22 OTUs that comprised the wheat rhizosphere metacommunity, eight were non-randomly distributed and classified as core OTU group members; 14 OTUs were randomly distributed across samples, falling below the 2.5% confidence limit line, and were classified as satellite OTUs.
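The core/satellite partitioning described above can be sketched as follows. This is a minimal Python sketch of one common form of the Poisson dispersion test, assuming the scaled statistic (n − 1) × variance/mean is compared against a chi-squared critical value with n − 1 degrees of freedom; the abundances and the hard-coded critical value (a standard table value for 3 degrees of freedom) are illustrative, not the study's data.

```python
from statistics import mean, variance

def index_of_dispersion(abundances):
    """Variance-to-mean ratio of an OTU's abundances across samples."""
    m = mean(abundances)
    return variance(abundances) / m if m > 0 else 0.0

def is_core(abundances, chi2_upper):
    """Classify an OTU as 'core' (non-randomly distributed) when its
    scaled dispersion statistic exceeds the upper chi-squared confidence
    limit for n - 1 degrees of freedom; otherwise 'satellite'."""
    n = len(abundances)
    return (n - 1) * index_of_dispersion(abundances) > chi2_upper

# Chi-squared upper 2.5% critical value for 3 degrees of freedom (table value)
CHI2_975_DF3 = 9.348

clumped = [30, 0, 0, 0]   # aggregated in one of four samples
even = [2, 3, 2, 3]       # roughly Poisson-like across samples

print(is_core(clumped, CHI2_975_DF3), is_core(even, CHI2_975_DF3))  # True False
```

With real data, the critical value would be computed for the actual number of samples (e.g. with `scipy.stats.chi2.ppf(0.975, n - 1)`) rather than hard-coded.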
The phylogenetic distribution of core and satellite OTUs, and the samples in which they were detected, are shown in Figure 1. Of the 24 OTUs in the wheat soil metacommunity, 9 were core and 15 satellite. Within the OSR rhizosphere and OSR soil metacommunities there were 8 and 9 core OTUs and 16 and 19 satellite OTUs, respectively. Further, the core group OTUs accounted for the majority of relative abundance in each habitat: wheat rhizosphere, 97.8%; wheat soil, 96.7%; OSR rhizosphere, 97.8%; and OSR soil, 96.9%. Plasmodiophorid diversity between habitats was compared using Fisher's alpha index of diversity (Figure 3). For the whole metacommunities, mean sample diversity was not significantly different between the wheat and OSR rhizospheres (P = 0.28) or between the wheat and OSR bulk soils (P = 0.18), but was significantly different at the P < 0.05 level in all other instances (Figure 3A). These patterns of diversity were also reflected between core OTU groups: wheat and OSR rhizosphere, P = 0.76; and wheat and OSR soil, P = 0.14 (Figure 3B). Within the satellite OTU groups, although diversity was more variable, OSR rhizosphere mean diversity was significantly lower than that of each of the soil satellite groups (P < 0.05 in all instances; Figure 3C). Analysis of similarities (ANOSIM) tests demonstrated that the whole, core, and satellite groups compared between habitats were highly dissimilar and significantly divergent from each other (P < 0.0001 in all instances; Figure 4), with the exception of the wheat and OSR soil OTU groups, which were not significantly different (P > 0.05). Similarity of percentages (SIMPER) analysis of the whole metacommunity was used to identify the OTUs that contributed most to the dissimilarity between the four habitat metacommunities. These OTUs are listed in Table 1; typically, the abundant core OTUs contributed most to the compositional dissimilarity between metacommunities.
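Fisher's alpha, used above to compare diversity between habitats, is defined implicitly by S = alpha * ln(1 + N/alpha), where S is the number of OTUs and N the number of individuals (reads) in a sample; it has no closed form and must be solved numerically. A minimal standard-library sketch (not the authors' code; names are ours):

```python
import math

def fishers_alpha(n_individuals, n_species, tol=1e-9):
    """Solve S = alpha * ln(1 + N / alpha) for Fisher's alpha by bisection.

    Requires 0 < n_species < n_individuals; g(alpha) below increases
    monotonically from 0 toward N as alpha grows, so the root is unique.
    """
    g = lambda a: a * math.log(1.0 + n_individuals / a)
    lo, hi = 1e-9, 1.0
    while g(hi) < n_species:       # grow the upper bracket until it overshoots
        hi *= 2.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if g(mid) < n_species:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

For example, a sample with 1000 reads spread over 50 OTUs gives an alpha of roughly 11; richer samples at the same read depth give larger alpha.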
FIGURE 2 | Random and non-random dispersal of plasmodiophorid OTUs through each habitat, visualized by decomposing the overall distribution using an index of dispersion based on the ratio of variance to mean abundance for each OTU, plotted against the number of samples in which the OTU was present in that community. The line depicts the 2.5% confidence limit for the χ2 distribution. Taxa that fall below this line follow a Poisson distribution, are randomly distributed, and are considered satellite taxa, whereas those above the line are non-randomly distributed and are considered core taxa. The 97.5% confidence limit was not plotted, as no taxon fell below that line.

FIGURE 3 | Box plot comparisons of plasmodiophorid diversity between habitats for (A) all, (B) core, and (C) satellite OTUs using Fisher's alpha index of diversity. Boxes represent the interquartile range (IQR) between the first and third quartiles, and the line inside represents the median. Whiskers denote the lowest and highest values within 1.5 × IQR from the first and third quartiles, respectively. Circles represent outliers beyond the whiskers. Asterisks denote significant differences in comparisons of diversity determined by two-sample t-tests (*P < 0.05 and **P < 0.005).

The impact of habitat type was assessed further at the individual OTU level using volcano plots, plotting fold-change in relative mean abundance against significance (P) values from two-sample t-tests of differences in relative abundance for each taxon (Figure 5). Minimal significant impact was observed between the wheat and OSR soil metacommunities and between the wheat and OSR rhizosphere metacommunities, respectively, while more pronounced significant fold-changes in relative abundance were observed between rhizosphere and soil metacommunities, irrespective of whether the plots were planted with wheat or OSR.
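The volcano-plot quantities described above (fold-change in relative mean abundance against a two-sample t-test P value) can be sketched as follows. This is an illustrative reconstruction, not the authors' analysis code; for compactness the two-sided P value uses a normal approximation to the t distribution, which is adequate only for moderate-to-large sample sizes (in practice a t distribution, e.g. via scipy.stats, would be used).

```python
import math
from statistics import NormalDist, mean, variance

def volcano_point(abund_a, abund_b, eps=1e-9):
    """Return (log2 fold-change of habitat B relative to A, two-sided P).

    Welch's t statistic is computed exactly; the P value uses a normal
    approximation to the t distribution. eps guards against division by
    zero for OTUs absent from one habitat.
    """
    ma, mb = mean(abund_a), mean(abund_b)
    log2_fc = math.log2((mb + eps) / (ma + eps))
    se = math.sqrt(variance(abund_a) / len(abund_a)
                   + variance(abund_b) / len(abund_b))
    t = (mb - ma) / se
    p = 2.0 * (1.0 - NormalDist().cdf(abs(t)))
    return log2_fc, p
```

OTUs with |log2 fold-change| well away from zero and P below 0.05 would sit in the "significant" region of the volcano plot.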
ANOSIM showed that rotation (i.e., crop frequency over the preceding 3 years) had no effect on the whole rhizosphere plasmodiophorid community in OSR, but did affect the bulk soil community (Table S1A). For wheat, however, there was evidence for effects of rotation on the composition of both rhizosphere and bulk soil communities. These effects were mediated through the core community, with no rotational effects on community composition for the rhizosphere or bulk soil of either crop in the satellite community (Table S1B,C).

FIGURE 4 | Analysis of similarities (ANOSIM) and community dissimilarity between habitats for (A) all, (B) core, and (C) satellite plasmodiophorid OTUs. Given is the ANOSIM test statistic (R, as columns) and the probability (P, asterisks) that two compared groups are significantly different at the P < 0.05 level (all significant differences were less than P < 0.0001). ANOSIM R and P values were generated using the Bray-Curtis measure of dissimilarity. R scales from +1 to -1: +1 indicates that all the most similar samples are within the same groups; R = 0 occurs if high and low similarities are perfectly mixed and bear no relationship to the group; and -1 indicates that the most similar samples are all outside of the groups. Also given are the Bray-Curtis quantitative measures of dissimilarity between groups, denoted as circles.

FIGURE 5 | Changes in plasmodiophorid OTU abundances between habitats, visualized using volcano plots displaying fold-changes in relative abundance of OTUs between compared habitats. Positive and negative values represent increases and decreases in relative mean % OTU abundance within a habitat when compared to another habitat. The gray horizontal line depicts P = 0.05; OTUs above that line have significant fold-changes in relative abundance, whereas those falling below the line do not. Numbers are given for OTUs with significant fold-changes in abundance.

DISCUSSION

Our group-specific PCR approach coupled with HTS showed that plasmodiophorids are common and diverse in both rhizosphere and bulk agricultural soils. Positive relationships between abundance and distribution have been observed at many spatial scales for taxa classified into different types of ecological organization (for example, guild or community; Guo et al., 2000). Within the current study, we also observed significant positive DARs (Figure 2A), indicating that plasmodiophorid OTUs that were widely distributed throughout each habitat metacommunity were more locally abundant than taxa with a more restricted distribution. Therefore, as has been observed for other ecological communities, the commonness and rarity of plasmodiophorid taxa within the different metacommunities was related to their occupancy of the local communities (van der Gast et al., 2011), and this allowed decomposition of the DARs to objectively categorize core and satellite OTUs (Figure 2B). Plasmodiophorid assemblages differed strongly between the rhizospheres of OSR and wheat and between all rhizosphere and bulk soil samples; the only non-significant community comparison was between the two bulk soil sample sets. The plants therefore clearly exerted a selective force modifying the plasmodiophorid community in rhizosphere/root samples. It is known that the exudates of certain roots trigger germination of resting spores of P. brassicae and that these plants can then serve as alternative hosts (Friberg et al., 2006; Rashid et al., 2013), and it is likely that root exudates stimulate the germination of other phytomyxid species in a similar way. Plasmodiophorid diversity was lower in the rhizosphere than in bulk soil, illustrated by significantly lower Fisher's alpha diversity indices, particularly with respect to core OTUs in both wheat and OSR rhizospheres (Figure 3).
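The ANOSIM comparisons reported above are built on the Bray-Curtis measure of dissimilarity between abundance vectors, BC(x, y) = 1 - 2 * sum(min(x_i, y_i)) / (sum(x_i) + sum(y_i)). A minimal, illustrative sketch (not the authors' code):

```python
def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two abundance vectors.

    0 means identical composition; 1 means no shared taxa at all.
    """
    shared = sum(min(a, b) for a, b in zip(x, y))
    return 1.0 - 2.0 * shared / (sum(x) + sum(y))
```

ANOSIM's R statistic then compares the ranks of these dissimilarities between groups to those within groups.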
Further, plasmodiophorid community composition differed significantly between rhizosphere and bulk soil habitats (Figure 4). In these cases the rhizosphere is selective, recruiting a subset of the pool of diversity from the bulk soil into the rhizosphere microbiota. This trend has been observed for other microbial taxa in the rhizosphere (Morgan et al., 2005). Even if the change in local biodiversity is simply caused by missed detection of some OTUs due to the relatively very high representation of others, or is caused by an imbalance in gene copy numbers, it is indicative of a diversity shift between rhizosphere and bulk soil habitats. Significant shifts in the frequency of OTU occurrence in the rhizosphere imply a positive or negative functional relationship between plasmodiophorids and plants, enabling a functional relationship between an environmental sequence and its hosts to be inferred and potentially offering a tool for the identification of secondary interactions. In some cases these involved characterized plasmodiophorids, which corroborated or extended existing knowledge (Braselton, 2001). However, the majority of the OTUs defined in this study were phylogenetically distant from characterized species. Bacterial and fungal communities inhabiting the rhizosphere are shaped by a variety of factors, including plant genotype, rotation, and plant age (Hilton et al., 2013; Chaparro et al., 2014). In our study, most variation in plasmodiophorid rhizosphere communities was associated with plant species, with rotation having minor effects on the community in wheat but not in OSR. In the OSR and wheat crops, 15 and 11 OTUs, respectively, were significantly differently distributed between the rhizosphere and soil habitat metacommunities (Figures 5A,F). However, the community shifts were not the same in each case. In the OSR rhizosphere, all but three of the 15 significantly different OTUs decreased in relative mean % abundance compared to OSR soil (Figure 5F).
The three OTUs with increased abundance in the OSR rhizosphere were OTU 1 (the Brassicaceae-associated Spongospora nasturtii), OTU 7 (Spongospora-like), and OTU 17, which groups with the environmental PlasX clade. Of the other 12 OTUs, which had decreased abundance in the rhizosphere relative to soil, OTUs 2, 4, 5, 9, 24, and 26 had the greatest abundance shifts and lowest P-values: OTU 2 is the largely cereal-associated Polymyxa graminis, which will only form secondary, alternative infections in OSR roots; OTU 4 is a novel member of PlasX, more deeply branching than OTU 17; OTU 5 is the oomycete parasite Woronina pythii; OTU 9 groups in the PlasY clade; and OTUs 24 and 26 are related to the Plantaginaceae-associated Sorosphaerula veronicae (Figure 1). These may be associated with other angiosperm species present in the bulk soil but not in the crop rhizospheres (e.g., weeds; Veronica spp. are abundant agricultural weeds), or with other organisms (e.g., oomycetes) associated with other plant species. The sequence of OTU 1 was very similar to that of Spongospora nasturtii (AF310901), differing only in one homopolymer region of the amplicon, and therefore possibly representing the same sequence in this region. However, the traditional taxonomic concept of phytomyxids, based on the morphology and arrangement of the resting spores, is not always well reflected in molecular phylogenies. Different plasmodiophorid species can have very similar 18S rDNA sequences (Neuhauser et al., 2014), so this does not prove that OTU 1 is S. nasturtii, but it is certainly very closely related. S. nasturtii causes crook root disease of watercress (Nasturtium officinale, a brassica) in the UK (Claxton et al., 1998), as well as in Belgium, France, and the US (CABI, EPPO, 2011). We therefore show that this or a closely related lineage is preferentially associated with OSR (another brassica), and also negatively associated with the wheat rhizosphere relative to proximal bulk soil.
Overall, the increase of OTU 1 in the OSR rhizosphere is interesting from a biological point of view, as it indicates either a wider host range of the crook root parasite S. nasturtii within the brassicas, including OSR, or the existence of a closely related lineage capable of infecting OSR. Equally interesting are OTUs representing currently uncharacterized lineages detected only via our environmental sequencing, which also responded positively and negatively to the rhizosphere habitat, indicating that these lineages interact with the respective plant or with another closely plant-associated organism. It is notable that, in addition to OTU 7 (which has no close relative in Figure 1) and OTU 17 in the PlasX clade, the related OTUs 14 and 23 were also positively associated with OSR, being detected only with OSR in this study, in both soil and rhizosphere. Whether these interactions are those of a fully compatible host-parasite pathosystem or of an "alternative host" type cannot be answered at this stage, but any form of increased interaction between host and parasite will have an ecological role, which can quickly translate into productivity changes in the agricultural context. The wheat rhizosphere showed different plasmodiophorid associations compared to bulk soil from OSR plots. Of the 11 OTUs that were significantly differently distributed between the wheat rhizosphere and proximal bulk soil, only two were relatively more abundant in the rhizosphere: OTU 24 (which decreased from OSR soil to rhizosphere) and the closely related OTU 25, both of which are closely related to Sorosphaerula veronicae but almost certainly not the same species. Molecular phylogenies have previously shown that Sorosphaerula, Polymyxa, and Ligniera form a well-supported clade in which the borders between the genera are less well defined (Neuhauser et al., 2014). The known hosts of S. veronicae include different Veronica spp. (Plantaginaceae). It is worth noting that a species called S.
radicalis has been described from the root hairs of different grasses in the UK (Ivimey Cook and Schwartz, 1929), as has Ligniera pilorum, which was reported from Poa spp. root hairs (Karling, 1968). However, no DNA sequences of these species are available. It is therefore possible that some of the OTUs found here correspond to already described species without a validated DNA record. The nine OTUs less frequently detected in the wheat rhizosphere than in bulk soil (OTUs 1, 2, 4, 5, 6, 7, 8, 9, and 11; Figure 5A) include seven (2, 4, 5, 6, 9, 11) that were also more abundant in OSR bulk soil than in the OSR rhizosphere. However, the other two (OTUs 1 and 7; Spongospora relatives) were more abundant in the OSR rhizosphere than in soil, the converse of the situation in wheat. Directly comparing wheat with OSR rhizosphere samples (Figure 5B), the Sorosphaerula relatives OTUs 24, 25, 26, and 28 and the PlasX member OTU 4 were significantly less abundant in OSR, whereas OTUs 1, 3, 7, and 18 (Spongospora relatives) and 17 (long-branched PlasX) were significantly more abundant in OSR. Therefore, the rhizosphere habitat of both crop types positively and negatively selected plasmodiophorid lineages from the surrounding bulk soil. Although these sets of OTUs overlapped, they were not identical, with different recognized genera being significantly shifted in each case. Further, the direction of OTU abundance shifts differed between crop species. It is important to note that different plasmodiophorid taxa are associated with particular plant hosts with which they form fully compatible interactions, but for all the species mentioned above in relation to wheat and OSR rhizospheres it is known that a number of other hosts can be utilized, at least for a short time. Our results suggest some strong associations between protists and plants that were not previously recognized, and it will be up to future research to identify the biological basis of these associations.
Three OTUs were found only in soil and not in any rhizosphere samples: OTUs 16 and 20 (both branching basally to the Sorosphaerula clade), and OTU 19 (sister to OTU 4, which showed negative abundance shifts in both wheat and OSR rhizospheres relative to soil). These OTUs are not sufficiently phylogenetically close to any characterized plasmodiophorid to infer their ecological roles. It is possible that they are directly excluded by other, positively rhizosphere-associated plasmodiophorids or other organisms. Alternatively, they may be symbionts of other plant species present at the site in limited abundance (weeds), they may be present in the form of resting spores, or they may be symbionts of other soil organisms that are relatively less abundant in the more specialized rhizosphere communities. The only lineages with significantly different distributions between the OSR and wheat bulk soil samples were OTUs 6 and 12, both of which group in clades not known to be associated with higher plants: OTU 6 in PlasY (Figure 1), closely related to an environmental sequence from a freshwater aquifer (and therefore possibly a parasite of an aquatic alga or oomycete), and OTU 12 in a clade with Woronina (a parasite of oomycetes). In light of the high plasmodiophorid diversity detected in this study, the absence of any OTU identical or similar to Plasmodiophora brassicae is notable. The PCR primers used had no mismatches with available P. brassicae sequences; however, there is no history of clubroot disease at the sites studied, so its absence is not unexpected. We show for the first time that many plasmodiophorids beyond the five well-studied agricultural parasites (P. brassicae, Spongospora subterranea, S. nasturtii, Polymyxa graminis, and P. betae) are present in significant numbers in agricultural soils and in the rhizosphere, even where their known primary hosts are absent or rare.
Generally, it is assumed that the distribution of plasmodiophorids follows that of their hosts, but the fact that phytomyxids can use alternative hosts (Neuhauser et al., 2014) means that predicting their diversity and distribution is non-trivial. Within the wheat rhizosphere samples, the Polymyxa graminis-like OTUs 2 and 24 were predictably dominant, as wheat is a primary host plant of P. graminis (Table 1). On the other hand, the S. nasturtii-like OTU 1 dominated the OSR rhizospheres. The primary host of S. nasturtii is another brassicaceous plant (Nasturtium spp.), so the organism represented by OTU 1 may interact in a similarly compatible way with a range of brassicas. Spongospora spp. are known vectors of plant viruses (Merz and Falloon, 2009), pointing to an additional interesting aspect of this interaction. The DNA-based detection used in this study does not itself discriminate between active and dormant forms; however, in this system it is apparently sensitive enough to strongly indicate shifts in interaction dynamics between hosts and symbionts. We show that functional, as well as phylogenetic and distributional, information can be inferred from environmental sequencing (eDNA) methods combined with a structured and biologically informed sampling strategy. Our results show that diverse plasmodiophorid lineages were positively associated with rhizosphere/root samples compared to bulk soil, and that the enriched lineages differed between wheat and OSR rhizospheres/roots, indicating that selection processes in the rhizosphere/root play a role in the establishment and persistence of plant-associated phytomyxids, with the potential to increase or decrease the load of pathogenic species.

AUTHOR CONTRIBUTIONS

DB, ST, and GB designed the research; ST and SH performed the research; DB, CvdG, and GB performed the analyses; and DB, CvdG, SN, and GB wrote the paper.
End-to-End Encrypted Message Distribution System for the Internet of Things Based on Conditional Proxy Re-Encryption

In light of the existing security vulnerabilities within IoT publish–subscribe systems, our study introduces an improved end-to-end encryption approach using conditional proxy re-encryption. This method not only overcomes the limitations associated with reliance on a trusted authority and the challenge of reliably revoking users in previous proxy re-encryption frameworks, but also strengthens data privacy against potential collusion between the broker and subscribers. Through our encryption protocol, unauthorized re-encryption by brokers is effectively prevented, enhancing secure communication between publisher and subscriber. Implemented on HiveMQ, an open-source MQTT platform, our prototype system demonstrates significant enhancements: compared with the state-of-the-art end-to-end encryption work, the encryption overhead of our scheme is comparable, and the decryption cost is approximately half. Moreover, our solution significantly improves overall security without compromising the asynchronous communication and decentralized authorization foundational to the publish–subscribe model.
Introduction

To realize data communication among a large number of entities, large-scale Internet of Things (IoT) systems generally use the publish-subscribe (pub/sub) paradigm for data distribution. The most commonly used protocols that work in the pub/sub paradigm are Message Queuing Telemetry Transport (MQTT) [1], Advanced Message Queuing Protocol (AMQP) [2], etc. Subscribers can subscribe to a "message topic", and a publisher can publish messages to that topic. All subscribers receive the publisher's messages through the routing of the message broker between the publisher and subscribers. The pub/sub paradigm decouples senders and receivers in time and space: they do not need to be directly connected or online simultaneously. It is more flexible, efficient, and scalable than the point-to-point data exchange mode. A typical pub/sub-based IoT system includes three types of components: IoT devices, a message broker, and a user management application. The devices and the management application serve as publishers and subscribers, respectively. A device publishes the data collected by its sensors to a specific topic, and the authorized users of the device subscribe to that topic through the application. The message broker in the middle routes the data to all the authorized users.
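The topic-based routing at the heart of the pub/sub paradigm can be illustrated with a toy in-memory broker. This is a didactic sketch, not MQTT: real brokers such as HiveMQ additionally provide wildcard topic filters, QoS levels, retained messages, and network transport; all names here are hypothetical.

```python
from collections import defaultdict

class Broker:
    """Toy in-memory message broker that routes messages by exact topic match."""

    def __init__(self):
        self._subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, payload):
        # Route the payload to every subscriber of the topic; publisher and
        # subscribers never interact directly (space decoupling).
        for deliver in self._subscribers[topic]:
            deliver(topic, payload)

# Usage: a device publishes a sensor reading; two applications subscribe.
broker = Broker()
received = []
broker.subscribe("home/temperature", lambda t, p: received.append(("app1", p)))
broker.subscribe("home/temperature", lambda t, p: received.append(("app2", p)))
broker.publish("home/temperature", "21.5")
```

Both subscribers receive the same message without knowing anything about the publisher, which is exactly the decoupling the paradigm provides.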
At present, Transport Layer Security (TLS) [3,4] is widely used in industry to protect the data between a client (publisher or subscriber) and the message broker. The broker decrypts the ciphertext from the publisher to obtain the plaintext, encrypts the plaintext again with the key negotiated with each subscriber, and then forwards the corresponding ciphertext to each subscriber. As a result, the broker can obtain all the data generated by clients and has complete control over users' data [5]. Furthermore, a message broker maintained by an IoT manufacturer is not completely trusted. Recent research shows that a large number of accessible message brokers implement no security policies at all, allowing anyone to receive data or inject messages [6]. Some researchers investigated a specific traffic monitoring system and discovered that its MQTT message broker was disclosing the traffic flow conditions in a specific area of Mexico City [7]. Even if the message broker deploys a security policy, users must trust the broker entirely. Currently, the message brokers in pub/sub-based IoT systems are deployed either by IoT device manufacturers on existing commercial message servers (such as EMQX [8], HiveMQ [9], or Solace [10]) or on IoT cloud platforms provided by third-party cloud computing vendors (such as Alibaba Cloud [11] or Amazon AWS [12]). The message broker is established and maintained by the IoT device manufacturer or the IoT cloud platform [13]. These manufacturers are not entirely reliable: if an administrator operates incorrectly or is bribed by a spy, or if the manufacturer acts for profit, user data are likely to be abused or shared with unauthorized entities, which threatens the security of user data.

In addition, most pub/sub-based IoT systems are currently built on the MQTT protocol, which was not designed for hostile environments. Jia Yan et al.
[14] found that the MQTT protocol has serious defects: a platform using this protocol can enable adversaries to steal users' private information and forge users' device status.

In order to prevent the threats posed by malicious message brokers, PICADOR [15] uses proxy re-encryption (PRE [16]) technology to provide end-to-end encryption from publishers to subscribers. In PRE, given a re-encryption key, a semi-trusted proxy can convert a ciphertext encrypted with the public key of user A into a ciphertext encrypted with the public key of user B without obtaining any plaintext information from the ciphertext. The proxy re-encryption process can be described as: ReEnc(rk_A→B, Enc(pk_A, m)) → Enc(pk_B, m). In PICADOR, the publisher encrypts its message with its public key. The broker re-encrypts the published message using the re-encryption key of each subscriber and then sends the corresponding ciphertext to each subscriber. The subscriber can decrypt the ciphertext with their private key. PICADOR needs a trusted authority to generate the re-encryption key for each subscriber from the publisher's private key and each subscriber's public key. When revoking the authorization of a subscriber, the simplest approach is for the broker to stop re-encrypting messages for the revoked subscriber. However, the broker is not entirely trusted: if it is compromised and still re-encrypts for the revoked user, the revoked user can still receive the latest messages. Therefore, depending on the broker for user revocation is not completely reliable. If we do not rely on the broker to revoke users, then we can only change the public-private key pair of the publisher whenever revocation is required and regenerate the re-encryption keys for all remaining authorized users, which would result in frequent changes to the publisher's public-private key pair. The long-term public-private key of a user is usually also used for authentication, both between users and between users and brokers. If the user's
public-private key pair changes frequently, then there will be inconvenience during authentication.

The root of the problem in PICADOR is that traditional proxy re-encryption allows the proxy to convert all ciphertexts without restriction [17]. As long as the proxy possesses the re-encryption key from A to B (rk_A→B), it can convert every ciphertext encrypted with the public key of A into a ciphertext that can be decrypted with the private key of B. This all-or-nothing feature is unsuitable for applications that need fine-grained authorization of decryption capability. Based on this, Weng et al. proposed the concept of conditional proxy re-encryption (CPRE) [18], which allows conditional conversion of ciphertexts. In CPRE, a condition value is introduced when generating a ciphertext under user A's public key, and the re-encryption key from A to B is also bound to a condition value. Only when the condition value used to generate the ciphertext equals the condition value bound to the re-encryption key can the proxy convert the ciphertext encrypted with the public key of A into a ciphertext encrypted with the public key of B. A can prevent the proxy from performing unauthorized re-encryption by controlling changes of the condition value [17]. The process of conditional proxy re-encryption can be described as: ReEnc(rk_A→B|w, Enc(pk_A, m, w)) → Enc(pk_B, m), which succeeds only when the ciphertext's condition value w matches the one bound to the re-encryption key. Because CPRE has significant advantages over PRE for fine-grained authorization, this paper introduces CPRE for the first time to realize end-to-end encryption in a pub/sub-based IoT system and prevent the broker from performing unauthorized re-encryption. We investigated a large number of existing conditional proxy re-encryption schemes; on the principles of low computational and communication overhead and high security, the conditional proxy re-encryption algorithm proposed by Weng et al. in 2009 [19] was selected for our system.
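To make the PRE/CPRE semantics above concrete, the following toy sketch implements a classical ElGamal-style (bidirectional, single-hop) proxy re-encryption over a deliberately tiny group, with the conditional check merely simulated by tagging ciphertexts and re-encryption keys. It is not the pairing-based unidirectional scheme of Weng et al. selected for our system, where condition matching is enforced cryptographically rather than by an explicit comparison; it only illustrates the shape of the Encrypt/ReKeyGen/ReEnc/Decrypt interface.

```python
import secrets

# Toy Schnorr group (deliberately tiny; real deployments use pairing groups
# or >=2048-bit parameters). P = 2*Q + 1, and G generates the order-Q subgroup.
P, Q, G = 2039, 1019, 4

def keygen():
    sk = secrets.randbelow(Q - 1) + 1            # sk in [1, Q-1]
    return sk, pow(G, sk, P)                     # (private, public) key pair

def encrypt(pk, m, cond):
    """ElGamal-style ciphertext under pk, tagged with a condition value."""
    k = secrets.randbelow(Q - 1) + 1
    return (m * pow(G, k, P) % P, pow(pk, k, P), cond)   # (m*g^k, g^{a k}, w)

def rekey(sk_a, sk_b, cond):
    """Bidirectional toy re-encryption key rk = b/a mod Q, bound to `cond`.
    (The scheme used in the paper is unidirectional and needs only A's private
    key and B's public key; this sketch simplifies that.)"""
    return (sk_b * pow(sk_a, -1, Q) % Q, cond)

def reencrypt(rk, ct):
    """Convert A's ciphertext into one decryptable by B, but only when the
    condition values match; in real CPRE this check is cryptographic."""
    rk_val, rk_cond = rk
    c0, c1, ct_cond = ct
    if rk_cond != ct_cond:
        raise ValueError("condition mismatch: re-encryption refused")
    return (c0, pow(c1, rk_val, P), ct_cond)     # g^{a k} -> g^{b k}

def decrypt(sk, ct):
    c0, c1, _ = ct
    shared = pow(c1, pow(sk, -1, Q), P)          # recover g^k
    return c0 * pow(shared, -1, P) % P
```

In the full protocol, the message slot would carry a wrapped symmetric key rather than application data, as described in the hybrid construction below.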
In our system, the publisher uses its private key, the condition value, and each subscriber's public key to generate a conditional re-encryption key for that subscriber, and sends the conditional re-encryption keys to the broker. When publishing a message, the publisher encrypts the message with its public key and the condition value and sends the ciphertext to the broker. The broker uses each subscriber's re-encryption key to re-encrypt the message and then sends the re-encrypted ciphertext to the corresponding subscriber. Finally, each subscriber obtains the plaintext by decrypting the message with their private key. When a subscriber is revoked, the publisher updates the condition value and generates new conditional re-encryption keys for each remaining subscriber. Specifically, the main contributions of our system are as follows:

(1) The conditional proxy re-encryption (CPRE) algorithm is introduced to solve the end-to-end encryption problem in the pub/sub-based IoT system. The re-encryption key is associated with a condition value; by changing the condition value, the publisher can ensure that the proxy cannot perform unauthorized re-encryption, thereby achieving reliable revocation of subscribers.

(2) Using an open-source MQTT message server, HiveMQ, we implement a prototype end-to-end encryption system for a pub/sub-based IoT system based on CPRE, and further enhance the system's performance through hybrid encryption and a hash chain. The performance of the system was tested, showing that our system is not only easy to implement on existing commercial message servers but also performs well.

Related Works

2.1. End-to-End Encryption in IoT

A large number of scholars have studied end-to-end security in pub/sub-based systems [20,21]. This section surveys the current research status of end-to-end encryption schemes according to the technology used.
The scheme based on a trusted message broker: Jia Yan [14] proposed MOUCON to solve access control issues in MQTT; the MQTT broker in MOUCON is responsible for verifying each client's access to each message. Clients must fully trust the broker, so this approach cannot resolve the security threat to user data posed by an untrusted broker.

The scheme based on a trusted key server: Markus et al. [5] proposed an end-to-end security scheme for Cyber-Physical Systems (CPS). The scheme relies on trusted key servers to distribute topic keys to publishers and subscribers. The key server stores the global authorization information of the system and the encryption keys; if the key server is compromised by an adversary, all account information, authorization information, and encryption keys of the system are exposed.

The scheme based on identity-based encryption (IBE): JEDI [20] realizes end-to-end encryption between devices and users in IoT, facilitates asynchronous communication, and supports decentralized authorization of keys. The scheme requires no modifications to the message broker, which eases deployment. However, it uses the identity-based encryption algorithm with wildcards (WKD-IBE) [21], which has high computational complexity, making it unsuitable for resource-constrained IoT devices.

The scheme based on secret sharing: Sana Belguith et al. [22] proposed an efficient and revocable secure publish-subscribe system. The system divides the broker into three parts that handle topic matching, routing, and message sending separately. As long as an adversary does not simultaneously compromise all three brokers, the solution remains secure. However, it requires a custom message broker, which is inconvenient to deploy.

The scheme based on special hardware: Segarra et al.
[23] restrict the broker to run only in a trusted execution environment (TEE) [24], thus ensuring that the broker functions as intended by the deployer. However, the installation, deployment, and maintenance of a TEE require professional management, which results in high costs. In addition, security attacks against the current mainstream TEEs exist [25], so TEEs still carry some security risk.

The scheme based on proxy re-encryption: PICADOR [15] implements end-to-end encryption between publishers and subscribers using proxy re-encryption. As can be inferred from the above analysis, this scheme depends on a trusted authorization center to generate re-encryption keys and relies on the broker to revoke users. Consequently, its revocation is unreliable.

Conditional Proxy Re-Encryption Schemes

Mambo and Okamoto [26] first introduced the concept of decryption-capability delegation, which offers higher performance than decrypting and then re-encrypting the ciphertext. In 1998, Blaze, Bleumer, and Strauss formally introduced the concept of proxy re-encryption (PRE) [16], and since then a great deal of research has been carried out on PRE. PRE allows a semi-trusted proxy to transform the decryption capability of a ciphertext without learning any useful information about the underlying plaintext, and is widely used in encrypted email forwarding, secure distributed file systems, encrypted spam filtering, and so on. PRE can be categorized according to different criteria: based on the direction of re-encryption, PRE can be one-way or two-way; based on the number of re-encryptions allowed, it can be single-hop or multi-hop.

Traditional proxy re-encryption is unable to provide fine-grained authorization of decryption capabilities. In response, Weng et al.
introduced the concept of conditional proxy re-encryption (CPRE) [18] and developed the first CPRE scheme. The re-encryption key of the scheme consists of two parts: the re-encryption key and the conditional key. However, the scheme considered only the security of the second-layer ciphertext, not the first-layer ciphertext. Weng [27] pointed out that the scheme of [18] is vulnerable to chosen-ciphertext attacks (CCAs), redefined a more stringent security model for CPRE, and proposed a new, efficient CPRE scheme. Both Shao [28] and Liang [29] proposed CCA-secure identity-based CPRE under the DBDH (Decisional Bilinear Diffie-Hellman) assumption; however, the literature [30] points out that the scheme given by Liang [29] is insecure.

Fang et al. [31] proposed an anonymous CPRE scheme that enables keyword search. Subsequently, Jae Woo Seo et al. [32] proposed a type-based proxy re-encryption (type-based PRE) scheme, where "type" is a keyword equivalent to the "condition" in CPRE; this scheme is therefore essentially similar to CPRE and achieves fine-grained authorization of user decryption capabilities. Son et al. [33] proposed a CPRE for big-data sharing on cloud platforms that outsources re-encryption key generation and decryption to the servers. Qiu et al. [19] and Liang et al. [34] each proposed CCA-secure CPRE schemes. Ge et al. [35] proposed an identity-based CPRE scheme that enables gate computation over condition values. Hu Xiong et al. [36] introduced a unidirectional multi-hop identity-based CPRE scheme that facilitates flexible and efficient data authorization in cloud computing environments and proved its security in the standard model. Arinjita et al. [37] presented a conditional proxy re-encryption scheme that does not require pairing operations.
The scheme is not built on bilinear pairing operations and thus has lower computational overhead. However, if the receiver colludes with the proxy, the proxy can compute the sender's private key as long as it possesses two conditional re-encryption keys. Therefore, the scheme of Arinjita et al. [37] cannot resist collusion attacks between the proxy and the receiver.

The following compares various CPRE schemes; the results are presented in Tables 1 and 2. The schemes proposed in [18,29,37] exhibit security issues and are therefore excluded from the comparison. Let |G| and |G_T| denote the bit lengths of elements in groups G and G_T, respectively. |Z_p| represents the bit length of elements in the prime field Z_p, and |m| is the bit length of the plaintext. t_p and t_e denote the time required for a single bilinear pairing operation and a single exponentiation operation, respectively. |σ| is the bit length of the signature output by a strongly unforgeable one-time signature algorithm, and |svk| is the length of its verification key. t_v is the time taken to verify a strongly unforgeable one-time signature. t represents the size of the access tree, and w denotes the number of attributes.
In a practical implementation of a CPRE algorithm, the re-encryption algorithm is executed by a semi-trusted proxy, which is generally deployed on servers or clouds with abundant resources. In contrast, the encryption and decryption operations are typically performed by IoT devices or personal handheld devices with far smaller computational resources than the proxy. Therefore, the overall principle in selecting a CPRE algorithm is to choose the scheme with the lowest computational and communication overhead; when these overheads are comparable, the scheme with lower encryption and decryption overheads is preferred. In terms of security, the schemes in the tables are all provably CCA-secure in either the standard model or the random oracle model. Although schemes proven secure in the standard model are theoretically more reliable, in this paper we choose a scheme proven secure in the random oracle model. The security of such schemes depends only on the hash function itself, and no practical attack has yet compromised a deployed cryptographic algorithm proven secure in the random oracle model (excluding some carefully constructed artificial counterexamples [38]). In addition, these schemes are computationally more efficient and have a wider range of applications.

Based on the aforementioned principles, this paper adopts the conditional proxy re-encryption algorithm proposed by Weng et al. [27] in 2009 to realize encryption from the publisher side to the subscriber side of our publish-subscribe system.

Preliminaries

The conditional proxy re-encryption (CPRE) scheme includes the following algorithms; its workflow is given in Figure 1:

Setup(λ_k): Given the security parameter λ_k, the algorithm outputs the public parameters params.
KeyGen(λ_k): Each entity uses this randomized key generation algorithm to generate a public-private key pair (pk_i, sk_i).

ReKeyGen(sk_i, ω, pk_j): Given the private key sk_i of the sender, the condition value ω, and the public key pk_j of the receiver, the re-encryption key generation algorithm outputs the re-encryption key rk_{i,ω→j} from sender i to receiver j.

Enc_1(pk_i, m): Given the public key pk_i of the sender and a plaintext m, the first-layer encryption algorithm outputs the first-layer ciphertext CT_i. This ciphertext cannot be re-encrypted.

Enc_2(pk_i, m, ω): Given the public key pk_i of the sender, a plaintext m, and the condition value ω, the second-layer encryption algorithm outputs the second-layer ciphertext CT_{i,ω}. This ciphertext can be re-encrypted, using an appropriate re-encryption key, into a first-layer ciphertext for a different recipient.

ReEnc(CT_{i,ω}, rk_{i,ω→j}): Given the second-layer ciphertext CT_{i,ω} and the re-encryption key rk_{i,ω→j}, the proxy runs the re-encryption algorithm to output the first-layer ciphertext CT_j.

Dec_1(CT_j, sk_j): Given the first-layer ciphertext CT_j and the private key sk_j, the first-layer decryption algorithm outputs the plaintext m or the error symbol ⊥.

Dec_2(CT_{i,ω}, sk_i): Given the second-layer ciphertext CT_{i,ω} and the private key sk_i, the second-layer decryption algorithm outputs the plaintext m or the error symbol ⊥.

By introducing a condition value into both the re-encryption key generation and the second-layer encryption algorithm, conditional proxy re-encryption ensures that the proxy cannot perform unauthorized re-encryption.

System Framework

Our CPRE-based end-to-end encryption system consists of three types of entities: IoT devices (referred to as senders), the message broker, and multiple authorized users (referred to as receivers). Our system utilizes the conditional proxy re-encryption algorithm proposed by Weng et al.
[19] to achieve end-to-end encryption from the publisher to the subscribers in a pub/sub-based IoT system; the specific algorithm design can be found in the literature [19]. In our system, the device owner generates a re-encryption key for each authorized user and sends the re-encryption keys to the broker. The device acts as the sender and encrypts each message with its public key and a condition value. The broker re-encrypts the ciphertext for each subscriber using the corresponding re-encryption key. Each authorized user can then decrypt the ciphertext with their private key. The framework of our system is shown in Figure 2.

To revoke a user, the device owner generates a new conditional re-encryption key for each remaining recipient using the device's private key sk_P, the new condition value ω′, and the public keys pk_{S_i} of the remaining authorized users; the owner no longer generates re-encryption keys for revoked users. The device uses the new condition value ω′ to generate the second-layer ciphertext CT_{P,ω′}, and the broker re-encrypts CT_{P,ω′} for the remaining authorized users with the new re-encryption keys, so that the remaining legitimate users can decrypt the message correctly. Since the re-encryption keys of revoked users are not updated, they remain associated with the previous condition value ω, while the new ciphertext corresponds to the updated condition value ω′. Even if the broker is compromised and still re-encrypts messages for revoked users, the condition values in the re-encryption key and the second-layer ciphertext are unequal, so a revoked user cannot correctly decrypt the re-encrypted ciphertext with their private key.

System Workflow

Specifically, the workflow of our CPRE-based end-to-end encryption scheme includes the following steps.
User Registration

When a user wants to use our system for device management or monitoring, they must complete the user registration process through the client application. The KeyGen(1^k) algorithm of CPRE is integrated into the application, allowing the user to generate their public-private key pair (pk_{S_i}, sk_{S_i}). The user keeps the private key sk_{S_i} secret.

Device Registration

When a user purchases a new IoT device, the user becomes the owner of the device and is responsible for device registration and authorization control. Typically, a newly purchased IoT device starts its life cycle with "device discovery" [3]. In this stage, the device owner requests to add the device through the client app, and the app establishes a local connection with the device to complete device registration and the binding of the device to its owner. We assume that during the device registration phase, the device and the owner's client app interact over this local connection, exchanging basic information and performing mutual authentication. During registration, the device owner uses the app's built-in key generation algorithm KeyGen(1^k) to generate a public-private key pair (pk_P, sk_P) for the device. Additionally, the owner generates a random initial condition value ω_0 and transfers ω_0 and the device's key pair (pk_P, sk_P) to the device over the local connection established during registration. The device owner also keeps the device's private key sk_P secret; it is used to generate conditional re-encryption keys for authorized users.
Authorization Phase

When the device owner wants to grant other users access to the device, the owner uses the device's private key sk_P, the condition value ω_0, and the public key pk_{S_i} of each authorized user to generate a conditional re-encryption key rk_{P,ω_0→S_i} for that user, which is sent to the broker of the publish-subscribe system.

Message Transmission Stage

The device runs the CPRE encryption algorithm, encrypting the collected information with its public key pk_P and condition value ω_0 to obtain the ciphertext CT_{P,ω_0} = Enc_2(pk_P, m, ω_0), which is sent to the broker. The broker then re-encrypts the ciphertext with the conditional re-encryption key rk_{P,ω_0→S_i} of each authorized user and sends the re-encrypted ciphertext CT_{S_i} = ReEnc(CT_{P,ω_0}, rk_{P,ω_0→S_i}) to the corresponding user. Finally, each authorized user uses their private key sk_{S_i} to decrypt CT_{S_i} and obtain the plaintext m = Dec_1(CT_{S_i}, sk_{S_i}).

Revocation Phase

When the device owner needs to revoke a user's access permission, the owner first randomly selects a new condition value ω_1; then the owner generates a new conditional re-encryption key for each remaining authorized user from that user's public key pk_{S_j}, the device's private key sk_P, and the new condition value ω_1. The updated re-encryption keys are sent to the broker. Finally, the device owner encrypts the new condition value with the device's public key and sends the ciphertext to the device.
The device decrypts the ciphertext with its private key sk_P to obtain the new condition value ω_1 and updates its condition value accordingly. Similarly, when the broker receives a new conditional re-encryption key rk_{P,ω_1→S_j} distributed by the device owner, it replaces the corresponding old key rk_{P,ω_0→S_j}.

System Optimization

Hybrid Encryption

Most IoT devices are low-power devices with limited resources, so we use hybrid encryption to further reduce device-side overhead. Before encrypting the collected messages, the device first selects a random symmetric key k, encrypts the key with CPRE, and sends it to the broker. Each authorized subscriber can decrypt the re-encrypted ciphertext with their private key, thereby obtaining the same symmetric key k. From then on, subsequent communications between the device and each subscriber are encrypted with the symmetric key k.

Hash Chain

Whenever the set of authorized users changes (for example, new users join or old users are revoked), generating new conditional re-encryption keys for all remaining authorized users with CPRE imposes significant key generation and transmission overhead on the device owner if the authorized user set changes frequently. Therefore, when a new user joins, a symmetric key is distributed to the new user by means of a hash chain [39]; the CPRE algorithm is used only when a user is revoked.
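Before turning to the details of the hash chain, the hybrid-encryption optimization described above can be sketched in a few lines. This is an illustrative Python sketch, not the prototype's Java/JPBC code: the CPRE transport of the symmetric key k is elided, and a SHA-256-keyed XOR stream stands in for AES purely so the example is self-contained. All names are illustrative.

```python
import hashlib
import secrets

def keystream(k: bytes, nonce: bytes, n: int) -> bytes:
    """Derive n bytes of keystream from key k and a per-message nonce."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(k + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def sym_encrypt(k: bytes, msg: bytes):
    """Encrypt msg under k; returns (nonce, ciphertext)."""
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(msg, keystream(k, nonce, len(msg))))
    return nonce, ct

def sym_decrypt(k: bytes, nonce: bytes, ct: bytes) -> bytes:
    """XOR with the same keystream recovers the plaintext."""
    return bytes(a ^ b for a, b in zip(ct, keystream(k, nonce, len(ct))))

# In the real system, k is chosen by the device and delivered to each
# subscriber once via CPRE; afterwards all bulk traffic uses k.
k = secrets.token_bytes(32)
nonce, ct = sym_encrypt(k, b"temperature=21C")
assert sym_decrypt(k, nonce, ct) == b"temperature=21C"
```

The design point is that the expensive pairing-based operations run once per key distribution, while each published message costs only cheap symmetric work.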
Specifically, the process of distributing a symmetric key to a new user via a hash chain is as follows. Assume that the symmetric key shared between the device and each subscribed user is k before the new user joins. The device owner uses k as the input of a one-way function to obtain a new session key k_1 = Hash(k) and sends this new session key to the newly joined user. At the same time, the device owner broadcasts a key update command so that the device and the other legitimate users also update their shared key from k to k_1 through the same one-way function. In this way, consistency of the shared session key between the device and all of its authorized users is ensured.

System Analysis

Below, we analyze our CPRE-based end-to-end encryption system for IoT.

Confidentiality

When a new user is authorized to join, the session key is updated through the hash chain. By the one-wayness of the hash function, the new user cannot deduce the previous session key k from the new session key k_1.
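The hash-chain update and its one-wayness can be illustrated with a short Python sketch, with SHA-256 playing the role of the one-way function (the paper does not fix a particular hash; this choice is an assumption for illustration):

```python
import hashlib

def next_key(k: bytes) -> bytes:
    """Derive the next session key in the hash chain (one-way step)."""
    return hashlib.sha256(k).digest()

# Current key shared by the device and its existing subscribers.
k = b"initial-session-key"

# A new user joins: the owner hands them k1 = Hash(k), while the device
# and the existing subscribers derive the same k1 locally from k.
k1_given_to_new_user = next_key(k)
k1_derived_locally = next_key(k)
assert k1_given_to_new_user == k1_derived_locally

# Because SHA-256 is one-way, the new user cannot walk the chain backwards
# from k1 to k, so traffic sent before they joined stays confidential.
```

Note that the chain only protects *past* keys from *new* members; removing a member still requires the CPRE re-keying described above, since a revoked user can always hash forward from a key they already hold.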
When a user is revoked, the system uses CPRE to update the session key. The device owner randomly selects a new condition value and generates new conditional re-encryption keys for the remaining users, but no longer generates re-encryption keys for the revoked user. When the device transmits a randomly selected new session key encrypted with CPRE, it uses the new condition value, and the broker re-encrypts with the new conditional re-encryption keys, so that the remaining legitimate users can decrypt with their own private keys and obtain the new session key. The revoked user's conditional re-encryption key is still tied to the old condition value; even if the broker colludes with the revoked user and re-encrypts with the old conditional re-encryption key, the condition value in the device's ciphertext does not match the one in the revoked user's re-encryption key, so the revoked user cannot decrypt and obtain the new session key. Based on the above analysis, a revoked user cannot obtain the new session key even by colluding with the broker. In addition, our scheme is built on the CPRE algorithm: by the security proof of CPRE in [19], the broker can only use a re-encryption key to transform ciphertexts and cannot obtain any plaintext from them.
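The revocation argument above can be made concrete with a toy model in Python. Only the condition-matching control flow of CPRE is modelled here; the cryptographic hardness comes from the pairing-based construction and is deliberately not reproduced (payloads are carried in the clear), and all identifiers are illustrative.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class ReKey:
    """Toy stand-in for rk_{P,w->S}: the re-encryption key held by the broker."""
    sender: str
    condition: str
    receiver: str

@dataclass(frozen=True)
class Ct2:
    """Toy stand-in for the second-layer ciphertext CT_{P,w}."""
    sender: str
    condition: str
    payload: str

def enc2(sender: str, msg: str, condition: str) -> Ct2:
    return Ct2(sender, condition, msg)

def re_enc(ct: Ct2, rk: ReKey) -> Optional[str]:
    # Re-encryption succeeds only when the condition value embedded in the
    # re-encryption key matches the one embedded in the ciphertext.
    if rk.sender == ct.sender and rk.condition == ct.condition:
        return ct.payload          # "first-layer ciphertext" for rk.receiver
    return None                    # unauthorized re-encryption fails

# Owner authorizes Alice under condition value w0.
rk_alice = ReKey("device", "w0", "alice")
assert re_enc(enc2("device", "temp=21C", "w0"), rk_alice) == "temp=21C"

# Revocation: the owner switches to a fresh condition value w1 and re-keys
# only the remaining users. Alice's stale key no longer matches, so even a
# colluding broker cannot produce output she can decrypt.
assert re_enc(enc2("device", "temp=22C", "w1"), rk_alice) is None
```

In the real scheme this mismatch is enforced cryptographically rather than by an `if` check, but the access-control consequence is the same.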
Support Asynchronous Communication

When a new user joins, the session key is updated through the hash chain, and offline users do not affect the session key update between online users and the device. When a user is revoked, updating the conditional re-encryption keys involves the device owner; as long as the device owner is online, the broker can obtain the latest conditional re-encryption keys and the device can obtain the latest condition value, and offline users do not affect this process. In short, offline users do not affect the key update process of online users, so our scheme supports asynchronous communication.

Support Decentralized Authorization

In our scheme, the authorization and revocation of device access rights are controlled solely by the device's owner and do not rely on trusted third parties. If a device owner is compromised by an adversary, only the security of that owner and the devices they manage is affected; the security of other devices and users is not.

Support the Decoupling of Publishers and Subscribers

Our scheme is designed on top of CPRE. The device encrypts data with its own public key; it does not need to encrypt separately for each authorized user, nor does it need to know which users have subscribed to its data. The broker completes the transformation and distribution of the ciphertext sent by the device. Therefore, the publisher (i.e., the device) and the subscribers (i.e., the users) are decoupled, and their relationship is controlled and managed by the device owner.

Prototype Implementation and Performance Analysis

This section implements the above IoT end-to-end encryption system and evaluates its performance.
Implementation of the Prototype System

We implement each module in Java. Our prototype system includes three types of entities: the publisher, multiple subscribers, and the message broker, which performs the re-encryption operations. The implementation of the CPRE algorithm [19] is based on JPBC (the Java Pairing-Based Cryptography Library) [40]. The message broker is built on the open-source MQTT server HiveMQ [9], which supports customized function extensions. The publisher and subscriber are developed with the Eclipse Paho Java Client library [41]. The prototype system runs on a laptop with an Intel Core i7-5600U 2.6 GHz CPU and 8 GB RAM (Intel, Santa Clara, CA, USA) under Windows 7. The development environment for the publisher, subscriber, and broker is IntelliJ IDEA 2018.1.6. The publisher and subscriber are simulated with Java console programs.

CPRE Implementation

The CPRE scheme used in our system is constructed on a symmetric bilinear group. We therefore use the configuration file "a.properties" provided by the JPBC library to generate a Type A symmetric bilinear group based on the prime-order elliptic curve y^2 = x^3 + x mod p (p ≡ 3 mod 4), with a base field size of 512 bits and an embedding degree of 2.
Implementation of the Message Broker

The broker is built on the open-source MQTT server HiveMQ Community Edition [42], which provides an SDK (the HiveMQ Extension SDK) [43] that supports extension development. With the SDK, users can develop custom business logic to extend HiveMQ's functions, such as intercepting or controlling MQTT messages, integrating other services, collecting statistics, and adding fine-grained security features. We use the core of HiveMQ for message routing and forwarding, and implement message re-encryption in a customized extension that re-encrypts the message payload for each subscriber using that subscriber's re-encryption key.

HiveMQ provides multiple types of interceptors, which make it convenient to intercept and modify MQTT messages in extensions; our customized extension is implemented through these interceptors. Its goal is to re-encrypt the payload of a message according to each subscriber's re-encryption key after the message has been routed but before it is forwarded to the subscriber. Therefore, our extension uses the Publish Outbound Interceptor in HiveMQ, which allows the extension to intercept a PUBLISH message after the broker has routed it and to apply a different payload modification for each subscriber.

Implementation of the Client

The implementation of the publisher and the subscriber is relatively straightforward: it is based on the Eclipse Paho Java Client library, with the JPBC library added to support CPRE-based encryption and decryption of published and received messages.

Performance Analysis

This section evaluates the performance of each module of our system using the prototype built in the previous section.
Overhead of Distributing Session Keys Using CPRE

When the device receives a new condition value, it randomly generates a new session key, encrypts it with the device's public key and the new condition value, and sends the ciphertext to the broker. The broker re-encrypts the ciphertext for each subscriber and sends the re-encrypted ciphertext to each of them. Each subscriber decrypts the re-encrypted ciphertext with their private key and obtains the session key. When the device uses CPRE to transmit a 128-bit AES (Advanced Encryption Standard) [44] key, the computational overhead of each module (device, broker, and subscriber) in our system is shown in Figure 3. The computation overhead of the device is approximately 46 ms, and that of each subscriber is approximately 35 ms. The processing time required by the broker increases linearly with the number of users: as shown in Figure 3, each additional user adds approximately 9 ms to the broker's processing time. The broker is generally deployed on a server or cloud platform with abundant resources and can easily absorb this increase. The computation overhead of the device and of each user does not change with the number of users, which makes the scheme suitable for resource-constrained IoT devices.

Overhead of Secure Communication

After the device and the subscribers have established a shared symmetric key, communication between them is encrypted with the symmetric algorithm AES. We compare the computation overhead of the publisher and subscriber when using AES for message transmission against plaintext transmission. For multiple message sizes (128 B, 512 B, 1 KB, 2 KB), Figure 4 shows that encrypted transmission between the publisher and the subscriber adds approximately 0.2-0.3 ms of overhead per message. Therefore, secure transmission with symmetric keys incurs only a slight increase in computation cost.
Comparison with Related Schemes

Table 3 compares current end-to-end encryption schemes for IoT in terms of security, support for decentralized authorization, ease of deployment, and performance. Note: "-" indicates that the performance of the scheme was not evaluated on our prototype.

• Security: The message brokers in most IoT systems are not fully trusted. In a scheme that relies entirely on the message broker, the broker can obtain all of the users' information, which does not meet the confidentiality requirement.

• Decentralized authorization: If a scheme relies on a third-party trusted key server, the authorization and revocation of device access rights must go through the key server, so decentralized authorization is not supported. PICADOR likewise relies on a trusted authority to generate re-encryption keys for all users of the system and does not support decentralized authorization.

• Ease of deployment: Reference [22] splits the functions of the broker across multiple brokers, each of which must be customized; reference [23] requires special hardware. Both are therefore difficult to deploy.

• Performance: The schemes that rely on trusted brokers do not meet the confidentiality requirement, and the schemes of [22,23] are difficult to deploy; these schemes were therefore not re-implemented on our experimental platform. The scheme of [5] relies on a trusted key server and uses symmetric keys to establish session keys, so it distributes session keys faster than our scheme; in the secure communication stage, however, its overhead for encrypting and transmitting messages with symmetric keys is the same as ours.
The performance of PICADOR [15] is comparable to that of [20]. To compare fairly with [20], we re-implemented the WKD-IBE algorithm of [20] using our cryptographic library. In [20], the encryption algorithm takes almost 42 ms to encrypt 128 bits of data, and decryption takes about 62 ms to recover the plaintext (the decryption time includes both the time to generate a decryption key for the encrypted pattern and the time to decrypt the ciphertext; when measuring the computation overhead, we use a pattern of 20 attributes representing the URI, with the last six attributes representing the time). The encryption overhead of our scheme is comparable to that of [20], while our decryption cost is approximately half of theirs.

In summary, some existing schemes rely on the trustworthiness of the broker; their security assumptions are strong and they cannot meet the confidentiality requirement. Some schemes rely on a third-party trusted server for authorization and revocation and do not support decentralized authorization. Other solutions need a customized broker or rely on special hardware, which makes them inconvenient to deploy. Ref. [20] offers good security and deployability, but it relies on the more complex WKD-IBE algorithm, and its performance is lower than that of our proposed solution.
Conclusions

In a publish-subscribe-based IoT system, communication between devices and users is not one-to-one direct communication but one-to-many asynchronous communication forwarded by a broker in the middle. Currently, TLS is commonly employed to protect data transmission between the device and the broker; however, the broker can access the plaintext of all messages, so this approach cannot prevent the security and privacy risks that untrustworthy brokers pose to device data. We present and implement a new end-to-end encryption system based on conditional proxy re-encryption. Theoretical analysis and experimental results demonstrate that our proposed scheme is not only provably secure but also practical, feasible, and efficient. However, our system cannot yet revoke users efficiently, which will be the focus of our future research.

Figure 2. The framework of the IoT end-to-end encryption system. CPRE differs from PRE in that both the re-encryption key rk_{P,ω→S_i} and the second-layer ciphertext CT_{P,ω} are associated with a condition value ω. To revoke the authorization of certain users, the device owner generates a new condition value ω′ and sends it to the device, then generates new re-encryption keys for the remaining users; the device encrypts subsequent messages with the new condition value, and the broker re-encrypts them with the new conditional re-encryption keys.

Figure 3. Computation overhead of each module when distributing the session key based on CPRE.

Figure 4. Increased computational overhead (ms) of the publisher and subscriber versus message block size.

Table 1. Comparison of the computation overhead of each CPRE algorithm.

Table 2. Comparison of the communication overhead of each CPRE algorithm.
Table 3. Comparison of existing end-to-end encryption schemes.
Synthesis and Characterization of a Novel Composite Edible Film Based on Hydroxypropyl Methyl Cellulose Grafted with Gelatin A novel composite edible film was synthesized by grafting gelatin chain onto hydroxypropyl methyl cellulose (HPMC) in the presence of glycerol (used as a plasticizer) using a solution polymerization technique. The reaction was carried out in homogeneous aqueous medium. Thermal properties, chemical structure, crystallinity, surface morphology, and mechanical and hydrophilic performance changes of HPMC caused by the addition of gelatin were investigated by differential scanning calorimetry, thermogravimetric, Fourier transform infrared spectroscopy, scanning electron microscopy, X-ray diffraction, universal testing machine and water contact angle. The results shows that HPMC and gelatin are miscible and the hydrophobic property of the blending film can be enhanced with the introduction of the gelatin. Moreover, the HPMC/gelatin blend films are flexible, and exhibit excellent compatibility, good mechanical properties and also thermal stability, and could be promising candidates for food packaging materials. Introduction In the modern food industry, packaging materials are the "protective umbrella" for most food, keeping it fresh and safe during the manufacture process. This allows the food to be transported for long distances, and provides convenience to consumers. Food packaging materials account for up to 70% of the entire packaging industry field. Traditional food packaging materials are mostly synthetic polymers, commonly known as plastic bags, mainly produced from petroleum cracking products. However, this kind of material is difficult to degrade or may even be non-degradable. In China, the total output of plastic products reached 75.15 million tons in 2017. The annual disposable plastic packaging is about 80 million tons, which induces enormous pressure on the environment and resources. 
On the other hand, as petroleum is a non-renewable energy source, the decline in petroleum reserves is increasingly apparent [1]. At the same time, low-molecular-weight hazardous substances in synthetic polymer materials may migrate onto the surface of food, which makes them unsafe as food packaging materials. Therefore, the preparation of films and coating materials from food-grade biopolymers is a common solution in the food packaging field. Recent literature has also shown that it has become a major trend in the food industry to replace traditional synthetic polymer materials with biodegradable and edible raw materials. Frequently used degradable materials include polyester-based biomaterials, polyglycolic acid, polycaprolactone and plant-protein-based biomaterials [2]. Among these biodegradable raw materials, cellulose-composite packaging materials have occupied an important position in the field of food packaging due to their abundant sources, high biocompatibility and non-toxic, environmentally friendly nature.

Gels 2023, 9, 332 2 of 12

In view of the presence of abundant free hydroxyl groups, cellulose can participate in various reactions to yield derivatives with different properties. With the development of nanotechnology, nano-cellulose composite films are emerging [3]. Hydroxypropyl methyl cellulose (HPMC) is a common cellulose derivative bearing methyl and hydroxypropyl groups, which promote its water solubility. Therefore, it has been used in the food industry as an emulsifier, preservative, thickener, stabilizer and film-forming material [4]. As an edible plant-based raw material derivative, HPMC can form a transparent, oil-resistant, odorless, tasteless and water-soluble film. The film exhibits good mechanical properties, acts as an effective lipid and oxygen barrier and has moderate resistance to water vapor transfer [5,6].
However, due to its high price and poor water vapor barrier properties, composite HPMC films formed with other biopolymers (such as polyacrylamide or starch and its derivatives) have been utilized in many industrial circles for use in controlled/sustained drug delivery, medical capsules and packaging materials [7][8][9]. Gelatin (Gel) is a partial hydrolysate of collagen comprising Gly-Pro-Hyp sequences, which is normally found in most connective tissue. Gelatin, which is derived from living organisms, is widely used in many fields, and can be used in the chemical, pharmaceutical and food industries due to its high biocompatibility, weak antigenicity, bioactivity and good biodegradability [10]. Because of their excellent elasticity and edible safety, gelatin-based films still play an important role in controlled-release films, edible sausage casings and soft capsule coatings [11][12][13]. Nevertheless, gelatin film has some disadvantages as a food packaging material; for example, it has low thermostability and is hard and brittle, which limits its applications. In summary, single-material packaging films would be inefficient in maintaining the strength and rigidity that are normally required. Earlier studies have concluded that blended film materials based on two or three components display excellent properties. For example, after blending, HPMC can hinder the recrystallization of starch-based materials [6,[14][15][16][17]. In addition, a blended film based on collagen and HPMC has also been studied; this research showed that the thermal and morphological properties and mechanical strength of the composite film (collagen/HPMC 1/1) are better than those of pure collagen film. However, so far, there is no research on HPMC and Gel composites, and no systematic research on the impact of different HPMC:gelatin ratios on the structure and properties of HPMC/Gel composite films that could be applied as food packaging materials.
The objective of this work was to prepare a novel composite edible film based on HPMC and Gel. The effect of the HPMC and Gel percentages on the thermal stability, mechanical properties, reaction mechanism, morphology and hydrophilic performance of the composite material was also studied.

DSC

Figure 1 shows DSC profiles of the blended films prepared with two different drying methods. For most natural semi-crystalline polymers containing both crystalline and amorphous structures, the glass transition temperature Tg and melting temperature Tm can be identified in the DSC thermograms. However, it is difficult to distinguish Tg from the endothermic peak, because endothermic relaxation is characteristic of polymeric materials in the glassy state undergoing natural ageing [18]. Melting is an endothermic process and crystallization is exothermal; the Tm and exothermic peaks exhibited in the DSC thermograms of the different blended films were related to the degree of crystallinity of each sample. In early literature, it has been reported that blending with HPMC can decrease the melting point of poly(ethylene oxide) (PEO) because the reaction between the other polymer and PEO damages the crystal structure of PEO [15,19,20]. Similar to PEO, graft polymerization between HPMC and Gel can also reduce the denaturation temperature and endothermic enthalpy of pure Gel. According to one report, the drying temperature can affect the melting temperature of gelatin [21]. Furthermore, the Tm and melting enthalpy are attributed to the helix-coil transition of gelatin [22]. Therefore, the endothermic enthalpy increases with the Gel ratio in the blended film under natural drying conditions (20 °C, Figure 1a).
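The melting enthalpy discussed here is obtained by integrating the endothermic peak area in the thermogram above a baseline and normalizing by sample mass (the DSC methods later in the paper report roughly 4 mg samples heated at 10 °C/min). A minimal numerical sketch, using illustrative synthetic data rather than values from the paper:

```python
import numpy as np

def melting_enthalpy_J_per_g(temp_C, heat_flow_mW, sample_mass_mg,
                             heating_rate_C_per_min=10.0):
    """Peak area above a linear baseline, converted to J/g.

    heat_flow_mW is the endotherm-up DSC signal over the peak window.
    """
    t = np.asarray(temp_C, dtype=float)
    hf = np.asarray(heat_flow_mW, dtype=float)
    # Straight baseline drawn between the first and last points of the window
    baseline = hf[0] + (hf[-1] - hf[0]) * (t - t[0]) / (t[-1] - t[0])
    excess = hf - baseline                                # mW above baseline
    time_s = (t - t[0]) * 60.0 / heating_rate_C_per_min   # temperature axis -> time axis
    # Trapezoidal integration: mW * s = mJ
    area_mJ = np.sum(0.5 * (excess[1:] + excess[:-1]) * np.diff(time_s))
    return area_mJ / sample_mass_mg                       # mJ/mg == J/g
```

With this normalization, a larger endothermic peak area per gram corresponds directly to the higher melting enthalpy reported for Gel-rich films.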
Compared with natural drying, the temperature during freeze-drying is lower and the structural damage to the materials is smaller. Comparing Figure 1a,b, it can be seen that the freeze-dried samples show an obvious exothermal peak at high temperature, except for the pure HPMC film, which could be attributed to cold crystallization of the freeze-dried blended films. For pure protein, different drying methods can affect its thermal stability [23], and freeze-dried protein exhibited a higher endothermic denaturation peak than the naturally dried sample. Similarly, in our work the freeze-dried HPMC/Gel shows an obvious exothermal peak at high temperature, while the naturally dried sample exhibits an endothermic peak at low temperature, which indicates that the composite material becomes more heat-resistant after freeze-drying as the gelatin content increases. Therefore, the freeze-drying method is more suitable for packaging films that are used with high-temperature food.

TGA

TGA determines the mass change as a function of temperature and is commonly used to examine the thermolysis of a sample as well as the residual ash. The TGA curves characterizing the thermal decomposition of all films in N2 are presented in Figure 2.
As can be seen from the TGA curves in Figure 2, pure HPMC presents two main stages in its degradation pattern. However, before its thermal degradation, a small weight loss at 50 to 140 °C can be observed for the HPMC sample due to the vaporization of moisture. In addition, the first main stage of thermal destruction began at about 160 °C (Tonset 1), in which the weight loss percentage was 12%, and the maximum rate of weight loss occurred at 233 °C. At this temperature (160 °C), the plasticizer (glycerol) began to volatilize.
The following stage occurred between 350 and 480 °C, and this weight loss was mainly attributed to the breakage of the cellulose ether bonds, which involved simultaneous chain scission and demethoxylation. Compared with the literature, the TGA curve of pure HPMC in our experiment shows different thermal degradation behavior, mainly because of the different degree of substitution of the HPMC and the different preparation methods of the HPMC blended film [24]. During this experiment, part of the hydroxypropyl and methyl groups of HPMC may have broken away from the main chain in the process of heating.

For the Gel film, the thermal degradation is mainly divided into four steps. The first stage of thermal decomposition occurred between 50 °C and 140 °C.
The weight loss ratio was approximately 12%, and this loss was caused by water loss from the gelatin film structure. The fastest pyrolysis temperature of the second step was 145 °C, at which the Gel film lost about 11% of its mass. As with pure HPMC, the weight loss in this stage is attributed to the thermal decomposition of the glycerol. The following two steps have no clear boundary; the maximal degradation temperatures are 283 °C and 359 °C, respectively, and the weight-loss percentage is about 60%. This weight loss is caused by the thermal decomposition of the gelatin polymer chain [25,26]. Compared with the Gel film, HPMC has a higher thermal stability. Therefore, the thermal degradation of the blended films lies between that of the pure HPMC and Gel films. Generally, for polymers, the decomposition temperature at which the weight loss reaches 50% is called the semi-life temperature; a higher semi-life temperature indicates a better thermal stability. According to Figure 2, the semi-life temperature of the blended films increases with the HPMC content. These results show that the addition of HPMC can increase the thermal stability of the Gel film.
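The semi-life temperature defined above — the temperature at which the TGA curve first crosses 50% residual weight — can be read off a discretized curve by linear interpolation between the two bracketing data points. A small sketch with synthetic data (real curves come from the instrument export):

```python
def semi_life_temperature(temp_C, weight_pct, threshold=50.0):
    """Temperature at which residual weight first crosses `threshold` (%).

    Assumes temp_C is increasing and weight_pct is the TGA residual weight.
    Returns None if the sample never loses enough mass.
    """
    for i in range(1, len(weight_pct)):
        if weight_pct[i] <= threshold < weight_pct[i - 1]:
            # Linear interpolation between the bracketing points
            frac = (weight_pct[i - 1] - threshold) / (weight_pct[i - 1] - weight_pct[i])
            return temp_C[i - 1] + frac * (temp_C[i] - temp_C[i - 1])
    return None

# Illustrative curve: the 50% crossing falls between 350 and 480 C
print(semi_life_temperature([50, 160, 350, 480, 700],
                            [100, 95, 60, 30, 8]))
```

Comparing this crossing temperature across formulations is what supports the statement that the semi-life temperature rises with HPMC content.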
XRD Analysis

The crystal texture of the pure and blended HPMC films was also investigated by X-ray diffraction (Figure 3). The pure HPMC film exhibited a largely amorphous pattern with a broad, undefined peak at about 20° and two weak peaks at around 7.8° and 14.5°. According to previous research, there are two peaks at around 7.8° and 20° (2θ) for pure HPMC film [27]. In our diffraction pattern, the new peak at 14.5° for the HPMC film sample should be attributed to the matrix formed between HPMC and the plasticizer (glycerol), which is also in good agreement with results from the literature [28]. Research has shown that collagen exhibits an obvious peak at around 7.5° (2θ), corresponding to the diameter of the triple-helix structure, and a diffuse broad peak at about 20° (2θ), corresponding to the interval between amino acid residues along the collagen helix [29,30]. As the hydrolysate of collagen, the pure Gel film exhibited sharp peaks at 11.4° (2θ) and 20.5° (2θ) for the naturally dried sample, and at 7.0° (2θ) and 20.2° (2θ) for the freeze-dried sample, as well as three unresolved peaks at 28~35° (2θ) for the naturally dried Gel film. Therefore, the crystal form of the polymer materials can be maintained by the freeze-drying method.
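The 2θ peak positions above map to lattice d-spacings through Bragg's law (nλ = 2d sin θ); the XRD methods later in the paper give Ni-filtered Cu Kα radiation with λ = 1.5406 Å. A quick check of the reported peaks:

```python
import math

WAVELENGTH_A = 1.5406  # Cu K-alpha wavelength from the XRD methods, in angstroms

def d_spacing_A(two_theta_deg, wavelength_A=WAVELENGTH_A, n=1):
    """Bragg's law: n * lambda = 2 * d * sin(theta), with theta = two_theta / 2."""
    theta = math.radians(two_theta_deg / 2.0)
    return n * wavelength_A / (2.0 * math.sin(theta))

# 2-theta peaks reported for the films, in degrees
for two_theta in (7.0, 7.8, 11.4, 14.5, 20.0, 20.5):
    print(f"2theta = {two_theta:4.1f} deg -> d = {d_spacing_A(two_theta):5.2f} A")
```

The low-angle peaks near 7-8° thus correspond to spacings above 10 Å (consistent with a triple-helix-scale dimension), while the broad peak near 20° corresponds to spacings of roughly 4.4 Å.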
As the Gel ratio decreased in the blended film, the typical peak at 20° (2θ) weakened, and the peak at around 12° (2θ) gradually faded for the naturally dried film. For the freeze-dried samples, the characteristic diffraction peak of Gel at 7.0° weakened and even disappeared for the HPMC/Gel 8/2 sample. It is worth noting that, regardless of the drying method, the degree of crystallinity was found to be best in the HPMC/Gel 5/5 sample.

FT-IR of the Films

Generally, FTIR is a technique that is sensitive to the structure and local molecular environment of polymers. The FT-IR spectra of all composite films with different HPMC and gelatin ratios exhibited similar characteristic peaks with different amplitudes, depending on the percentage of gelatin (Figure 4). The band situated near 3280 cm−1 was found in all specimens, corresponding to the stretching vibration of the hydroxyl groups (-O-H) present in both the HPMC and gelatin molecules. It is generally known that the characteristic stretching absorption peaks of hydroxyl groups will shift to a lower frequency, especially when intermolecular and intramolecular hydrogen-bond linkages are formed [31].
As seen in Figure 4, the stretching vibration of the hydroxyl groups (-O-H) in the composite film moved from 3372 cm−1 to 3280 cm−1 upon the addition of gelatin, indicating a reaction between the HPMC and gelatin molecules. For the pure HPMC film, the characteristic absorption bands were at 3460 cm−1 and 1040 cm−1, resulting from the stretching vibrations of the hydroxyl (O-H) and ether bond (C-O) groups, respectively [8].
Compared with the pure HPMC membrane, new peaks situated around 1547 cm−1 and 1240 cm−1 can be detected for the blended films; these are the characteristic peaks of the amide II band (corresponding to the bending vibration of N-H) and the amide III band (corresponding to the stretching vibration of C-N), and can be attributed to the peptide bonds of the gelatin. Interestingly, the intensity of these two peaks increases with the gelatin ratio in the composite film, which indicates homogeneous recombination of HPMC and gelatin. Besides this, the peaks observed at around 1640 cm−1 correspond to the C-O of the six-carbon ring of HPMC and to the phenylalanine or tyrosine of gelatin [32]. As the gelatin content increases, the intensity of the peaks near 1040 cm−1 weakens, while the intensity at around 1645 cm−1 increases markedly, which implies that HPMC and gelatin are linked through intermolecular hydrogen bonds in the composite film. Generally, if the compatibility between the components of a composite material is poor, each polymer component shows its specific peak positions in the blended films. Conversely, there will be shifts in wavelength due to chemical interactions between the constituents if the polymers are miscible. In this research, there are significant differences between the spectra of the single components and the HPMC/Gel films, which is evidence of the miscibility of the HPMC and Gel blends.
Mechanical Properties

The tensile strength and elongation at break of packaging materials have a significant impact on their practical applications. The research of Fan et al. showed that chemical reactions among the components of different polymers have an obvious influence on the mechanical properties of the composite polymer [33]. The effect of increasing the proportion of Gel on the ultimate tensile strength and elongation at break of the HPMC/Gel composite films is shown in Table 1. From the data in Table 1, it can be seen that increasing the Gel content significantly increased the ultimate tensile strength and decreased the elongation at break of the blended films, except for sample 5, whose tensile strength was 26.76 MPa at a Gel content of 60%. By reducing intermolecular forces, increasing the mobility of the polymer chains and improving their flexibility, the addition of plasticizer can effectively reduce the inherent brittleness of composite films [5]. The increased crystallinity in the blended films contributes to the increased tensile strength. Compared with the commercial plastic packaging film HDPE (whose tensile strength is 25 ± 2 MPa), the tensile strength of the HPMC/Gel composite film is better [34].
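The quantities in Table 1 follow from the standard tensile-test definitions: tensile strength is peak force over the initial cross-section, and elongation at break is extension over the initial gauge length. The Methods section gives 5 mm wide strips and a 50 mm gauge length; the film thickness is not reported in this excerpt, so the default below is an assumption for illustration only.

```python
# Standard tensile-test definitions; strip width and gauge length from the
# Methods section. Film thickness is NOT reported in this excerpt --
# 0.1 mm is a placeholder assumption, not a measured value.

def tensile_strength_MPa(max_force_N, width_mm=5.0, thickness_mm=0.1):
    """TS = peak force / initial cross-sectional area (N/mm^2 == MPa)."""
    return max_force_N / (width_mm * thickness_mm)

def elongation_at_break_pct(extension_mm, gauge_length_mm=50.0):
    """EAB = extension at break / initial gauge length, in percent."""
    return 100.0 * extension_mm / gauge_length_mm
```

With these definitions, any uncertainty in the measured film thickness propagates directly into the reported MPa values, which is one reason averaging over three specimens (as described in the Methods) matters.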
Overall, the mechanical properties depend on the microstructural network, constituents, matrix-filler interactions, preparation conditions, plasticizer and existing intermolecular forces [35]. The changes in the mechanical properties can be attributed to the interactions among the polymer components and the rearrangement of the polymer network.

SEM

The microstructures of the longitudinal sections of the freeze-dried films, examined by scanning electron microscopy, are shown in Figure 5. The longitudinal section of the pure Gel film showed an excellent homogeneous and compact structure compared with the blended and pure HPMC films, exhibiting good film-forming properties [36]. In addition, the surfaces of all the HPMC/Gel blended films are uniform, meaning that the blended films did not show any microscopic phase separation. The two components in the composite film have abundant hydrophilic groups and could be linked by chemical bonding between gelatin or HPMC and glycerol molecules. These results suggest that HPMC and Gel have good compatibility.
Water Contact Angle

The measurement of the water contact angle (WCA) provided evidence of the nature of the surface and the microstructure of the composite films, and of the hydrophobicity and hydrophilicity of the HPMC-based films (Table 2). The Gel film exhibited a hydrophobic nature characterized by a high WCA, i.e., 101.23 ± 0.13°, whereas the HPMC film had a lower WCA value of 52.01 ± 0.63°, which means the hydrophobicity of the gelatin film was much better than that of the HPMC film. Similar conclusions have been drawn by Ding et al. [4]. The distribution and amount of hydrophilic groups on the surface of the polymer materials were the primary factors altering the WCA. As expected, the hydrophobicity of the HPMC/Gel composite membrane was visibly enhanced due to the introduction of hydrophobic groups from the Gel, such as indolyl and phenyl groups [37]. Accordingly, the WCA of the blended films gradually increased with the proportion of the Gel constituent. This phenomenon, which reduces the hydrophilicity of the composite films, can be ascribed to the formation of Schiff's base depleting the hydrophilic groups on the surface of the blended membrane [38].
A higher WCA represents a more effective resistance to water drop penetration [39]. High hydrophilicity means that the water in food is more easily lost during storage, which then results in changes in taste and texture.

Conclusions

Gelatin-HPMC blended films were successfully produced by graft copolymerization in aqueous solution. The XRD and SEM images show that the HPMC/Gel blended film (5/5) has a good degree of crystallinity, a homogeneous size distribution and smooth structures. According to the DSC and TGA results, the addition of gelatin could significantly improve the thermal stability of HPMC. The chemical structure of HPMC changed slightly after blending with gelatin. Meanwhile, the hydrophobicity of the HPMC material can be improved by the addition of gelatin, suggesting that HPMC/Gel blended films have the potential to substitute for traditional paper-plastic packaging materials. In addition, the mechanical properties of the blended films were influenced by the gelatin content, and the HPMC/Gel blended film (5/5) had a good tensile strength. These results show that the HPMC/Gel blended film could become a novel material with potential applications, especially in the food packaging field, because of its excellent properties such as low cost, environmental friendliness and ease of application.

Materials

A commercially available, food-grade HPMC (average Mn ~86,000, 1.8~2.0 mol methoxy per mol cellulose, Fisher Scientific Chemicals Company, Pittsburgh, PA, USA) was used in this work. Gelatin (type A, from pigskin) and glycerol (analytically pure) were purchased from Sigma-Aldrich Chemicals Company (St. Louis, MO, USA). All other chemicals used were analytically pure, were obtained from Fisher Scientific Co. (Pittsburgh, PA, USA) and were used without further purification.
Preparation of the HPMC/Gel Composite Material

The HPMC and gelatin were mixed in a three-necked bottle at HPMC:Gel weight ratios of 10:0, 8:2, 6:4, 5:5, 4:6, 2:8 and 0:10. The required amount of water was added to the bottle to give a mass fraction of 10%. The three-necked bottle was stirred for 12 h at 25 °C. The pH of the mixture was adjusted to 6.8, glycerol (20%, based on total polymer weight) was gradually added dropwise with gentle stirring, and the temperature was then raised to 85 °C for 2 h in order to obtain a homogeneous gelatin solution. Following this, the mixed solution was cooled to 25 °C to dissolve the HPMC, with stirring at 100 rpm for 1 h. After the reaction was completed, 15 g of the solution was poured into a round dish with a diameter of 10 cm and dried at room temperature to obtain the film. The freeze-dried films were prepared by pouring the solution into a petri dish and freeze-drying at −50 °C for 48 h.

X-ray Diffraction Measurements

In order to ascertain the crystalline nature of the prepared composite materials, X-ray diffraction studies were carried out on a Bruker D8 Advance X-ray diffractometer. The samples were scanned over the 2θ range from 5° to 60°. The scanning speed and step size were 0.3°/min and 0.01°, respectively. The operating target voltage was 40 kV, the tube current was 100 mA, and Ni-filtered Cu Kα radiation of wavelength λ = 1.5406 Å was used. Intensity versus 2θ scans were obtained for the different samples.

Fourier Transform Infrared Spectroscopy

FTIR spectra of all films were collected with a Nicolet 6700 FT-IR Spectrometer (Thermo Fisher Scientific Co., Waltham, MA, USA). A total of 64 scans were run for every sample over the wavenumber range from 400 to 4000 cm−1 with a resolution of 4 cm−1 at 25 °C. The FTIR spectra were taken in transmittance mode.
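The preparation recipe above — polymers at a 10% mass fraction in water, with glycerol at 20% of total polymer weight — can be turned into a small batch calculator. The example quantities are illustrative, not taken from the paper.

```python
def film_batch(total_polymer_g, hpmc_parts, gel_parts,
               solids_fraction=0.10, glycerol_fraction=0.20):
    """Component masses for one film-forming solution.

    solids_fraction: polymer mass / (polymer + water), per the preparation step.
    glycerol_fraction: glycerol as a fraction of total polymer weight.
    """
    parts = hpmc_parts + gel_parts
    return {
        "HPMC_g": total_polymer_g * hpmc_parts / parts,
        "Gel_g": total_polymer_g * gel_parts / parts,
        "water_g": total_polymer_g * (1.0 / solids_fraction - 1.0),
        "glycerol_g": total_polymer_g * glycerol_fraction,
    }

# Example: a 10 g polymer batch at the 5:5 HPMC:Gel ratio
print(film_batch(10.0, 5, 5))
```

Running the same calculation for each of the seven HPMC:Gel ratios (10:0 through 0:10) reproduces the full formulation series described above.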
Thermogravimetric Analysis

The thermal behavior of the scaffold materials was studied using a Perkin-Elmer thermal gravimetric analyzer (Pyris 1, Norwalk, CT, USA). The weight of each sample was about 5 mg, and all measurements were taken under a dry N2 atmosphere at 60 mL/min. All films were heated from 50 °C up to 700 °C at a heating rate of 20 °C/min.

Differential Scanning Calorimetry

The thermal properties were measured using a German NETZSCH 204 DSC under a nitrogen purge. Samples of approximately 4 mg were accurately weighed and sealed hermetically in a high-volume stainless steel pan. Samples were heated from 30 °C to 250 °C at a heating rate of 10 °C/min. The peak areas were determined from the DSC thermograms. Every measurement was performed in triplicate.

Mechanical Properties

The tensile strength (TS) and elongation at break of all the strip-shaped samples were determined using a CMT4204 tensile machine (MTS Industrial Systems Co., Ltd., Eden Prairie, MN, USA). First, the samples were cut into rectangles 5 mm wide and 100 mm long using a specific mold. The strip-shaped samples were then equilibrated in a 25 °C dryer at 65% relative humidity for 24 h. Subsequently, each specimen was gripped over a length of 50 mm and uniaxially stretched at a constant speed of 2 inch/min in the vertical direction. The tensile strength and elongation at break were recorded. The measurement was repeated three times for every sample to calculate the average value.

Scanning Electron Microscopy

The morphology of the freeze-dried HPMC/Gel scaffolds was observed by scanning electron microscopy (SEM, Hitachi S4800, Hitachi, Ltd., Tokyo, Japan). In brief, all the
Epidemiologic approaches for assessing health risks from complex mixtures in indoor air.

Indoor air may be contaminated by diverse gaseous and particulate pollutants that may adversely affect health. As a basis for controlling adverse health effects of indoor air pollution, the presence of a hazard needs to be confirmed, and the quantitative relationship between exposure and response needs to be described. Toxicological, clinical, and epidemiological studies represent complementary approaches for obtaining the requisite evidence. The assessment of the effects of complex mixtures poses a difficult challenge for epidemiologists. Understanding the effects of exposure may require accurate assessment of concentrations and personal exposures to multiple agents, together with analytical approaches that can identify independent effects of single agents and the synergistic or antagonistic effects that may occur in mixtures. The array of epidemiological study designs for this task includes descriptive studies, cohort studies, and case-control studies, each having potential advantages and disadvantages for studying complex mixtures. This presentation considers issues related to exposure assessment and study design for addressing the effects of complex mixtures in indoor air.

Introduction

Indoor air in residential and nonresidential structures is typically contaminated by a complex mixture of gaseous and particulate pollutants. The sources are diverse and include building occupants and their activities, combustion, building materials and furnishings, biological agents, and entry of contaminated outdoor air and soil gas (1,2). The air of a home might contain nitrogen dioxide (NO2) from unvented emissions from a gas stove or space heater, respirable particles from cigarette smoking, cooking, occupant activities, and outdoor air, formaldehyde from furnishings and plywood, tetrachloroethylene from recently dry-cleaned clothes, and allergens from a family cat.
The potential health effects of indoor air pollution are equally diverse, spanning from short-term annoyance and discomfort to permanent disability, cancer, and even death. Although the complexity of indoor air pollution is well recognized, most epidemiological studies of indoor air pollution and health have focused on the effects of single pollutants, e.g., NO2, environmental tobacco smoke, and formaldehyde, or a single outcome measure in relation to several exposures, e.g., respiratory symptoms in children, NO2, and environmental tobacco smoke. The restricted focus undoubtedly reflects, in part, the difficulty of accurately estimating personal exposures to multiple pollutants and multiple health outcomes. Furthermore, control strategies have tended to emphasize single pollutants and sources. However, even studies directed at a single pollutant inherently examine the effect of that pollutant on a background of exposure to other pollutants. Nevertheless, a full understanding of the health effects of indoor air pollution will require information on the effects of pollutant mixtures. This paper considers the epidemiological approaches applicable to studying the effects of multicomponent mixtures in indoor air. Relevant study designs and potential limitations are reviewed, as are approaches for exposure assessment and analytical approaches for assessing the effects of multiple exposures.

Concepts of Interaction

Studies of complex mixtures need to be designed with consideration of the potential patterns of combined effects of the component pollutants. The biological effect of one pollutant may be modified by the presence of other pollutants; this phenomenon, termed "effect modification" by epidemiologists, is more generally referred to as "interaction." Interactions may be synergistic (the effect of an exposure is increased by the presence of another factor) or antagonistic (the effect of an exposure is reduced by the presence of another factor).
Interaction is assessed with statistical modeling approaches; for the purpose of public health protection, synergism is considered to be present if the combined effect of the multiple factors exceeds that expected on the basis of additivity of the independent effects (3). The results of statistical modeling of interaction should be interpreted with consideration of the measurement scale (additive or multiplicative) inherent in the selected model and of the limited statistical power of such analyses. Interactions may reflect diverse biological phenomena (Table 1). For example, the effect of radon in causing lung cancer in nonsmokers might be modified by the presence of respirable particles generated by tobacco smoking. The increased concentrations of respirable particles tend to increase concentrations of radon progeny in inhaled air; the particles also alter the deposition of radon progeny in the airways of the lung. Thus, in this example, passive smoking not only affects exposure to radon progeny, but alters exposure-dose relations in the respiratory tract. For respiratory infection in children, the effects of exposures to NO2 and environmental tobacco smoke might be additive; both agents potentially affect the host defense mechanisms against inhaled pathogens. Molhave (4), in discussing the sick-building syndrome, emphasizes the potential role of interactions among indoor air pollutants and other factors determining comfort and symptom responses of building occupants. A wide range of physical and biological interactions can be postulated. For example, increased temperature in a space may directly affect occupants by reducing thermal comfort and indirectly affect occupants by increasing emissions of formaldehyde and other volatile organic compounds. Few generalizations can be offered concerning the likely directions or magnitudes of interaction among the components of complex, multicomponent mixtures.
In a multistep disease process, agents acting at the same step tend to have a combined effect that is additive, whereas agents acting positively at different steps tend to have a combined effect that is multiplicative (5). However, the potential range of mechanisms of interaction among indoor air pollutants and other factors determining responses to indoor environments is broad, extending from physical interactions influencing exposure to interactions at the most proximal sites of disease causation.

Exposure Assessment

Evidence for interaction may be gained from appropriately designed experiments, including animal exposures or other types of toxicological investigation, controlled human exposures to mixtures, and epidemiological studies. To provide insight into patterns of interactions among pollutants, an epidemiological investigation needs to incorporate accurate estimates of exposure to the relevant pollutants and other factors. Personal exposure refers to the air pollutant exposures experienced by an individual as the individual moves through various environmental settings. Thus, the link between the presence of a chemical contaminant in the environment and its contact with humans is complex, and in large part determined by patterns of human behavior. The portion of exposure that is absorbed, ingested, or inhaled into the body is termed the "dose." The definition of dose can be refined further by introducing the concept of "biologically effective dose," referring to the quantity of material actually reaching the site of toxic action. In many studies of air pollution and health, personal exposures to ambient pollutants were inferred from air pollution monitors sited in central locations, and exposures to indoor pollutants were assigned on the basis of the presence of sources, such as gas stoves or cigarette smoking. However, both of these approaches may introduce substantial misclassification of actual personal exposures.
New personal monitoring instrumentation, which is small and unobtrusive, has recently been developed (6). The measurements from this new generation of monitors have clearly demonstrated the inaccuracy of basing estimates of personal exposures in indoor and transit environments on measurements made at outdoor sites. Techniques for assessing personal exposure to air pollution can be divided into two major classes. The first approach measures the concentrations of the pollutant using monitors worn on the person or located in specific settings frequented by the person (i.e., home, workplace, or car), and the second estimates exposure from measurements of biological markers such as the pollutant concentrations in blood and breath samples. For example, in an investigation in Albuquerque, New Mexico (7), personal exposures of infants to NO2 were directly measured by placing a sampler on the child. Personal exposures were also estimated by monitoring NO2 concentrations in the rooms of the homes and then calculating an average exposure by weighting the concentrations by the time spent in each room. Biological markers of exposure are now available for many pollutants including tobacco smoke, carbon monoxide, some allergens, and various volatile organic compounds. In studying the effects of exposure to a multicomponent mixture, the sampling strategy should provide estimates of personal exposure to the component pollutants considered relevant to the health outcome. The monitoring task is potentially large and expensive; strategies that incorporate more intensive monitoring for a sample of the study population have been recommended (8).

Epidemiological Study Designs

The health effects of multicomponent mixtures can be investigated using conventional epidemiological study designs: the cross-sectional study, the cohort study, and the case-control study. Each study design has potential advantages and disadvantages, depending on the exposures and health outcomes of concern.
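The room-weighted exposure estimate described for the Albuquerque study is a time-weighted average of concentrations over the time spent in each microenvironment. A minimal sketch; the room names, NO2 concentrations, and occupancy times are invented for illustration, not the study's data:

```python
def time_weighted_exposure(segments):
    """Average exposure from (concentration, hours) pairs:
    E = sum(c_i * t_i) / sum(t_i)."""
    total_time = sum(t for _, t in segments)
    return sum(c * t for c, t in segments) / total_time

# Hypothetical day for an infant: (NO2 in ppb, hours spent in the room)
day = [(30.0, 10),   # bedroom
       (80.0, 4),    # kitchen, gas stove in use
       (20.0, 10)]   # living room
print(round(time_weighted_exposure(day), 1))  # → 34.2 ppb
```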
In addition to selecting a study design, an investigator needs to specify the approach to studying the effects of a mixture. The alternative strategies are diverse. The range of exposures can be restricted to minimize the possible interactions. For example, we are conducting a longitudinal study of respiratory infections in infants and NO2 exposure; households with any adult smokers are excluded. This strategy has the advantage of simplifying assessment of the independent effect of an exposure but does not provide information on combined exposures that may be experienced by broad segments of the population. For some mixtures, it may be possible to identify a surrogate for the overall degree of pollution; for example, the concentration of total volatile organic compounds might serve as an exposure measure in studying the sick-building syndrome. If emphasis is to be placed on characterizing interactions, then balancing the distribution of the study population among the various exposure groups improves efficiency.

Cross-Sectional Studies

In a cross-sectional study, often termed a "survey," observations concerning health status and exposure are made at a single point in time. The cross-sectional approach is most appropriate for exposures having acute rather than chronic effects and for exposures that can be presumed to have remained stable over time. It is not appropriate for studying the effects of rapidly changing mixtures, nor for studying diseases that occur only after a long period between onset of exposure and incidence, e.g., cancer. This design has the advantages of feasibility, generally manageable costs, and the ability to monitor a number of pollutants intensively at the time of study. For example, the cross-sectional approach has been widely used to investigate indoor air pollution and respiratory symptoms and lung function in children (1); outbreaks of building-related illness have also been investigated with this approach (9).
Disadvantages include the potential for bias introduced by the tendency of persons adversely affected by exposure to be underrepresented in the study population, and the limitations of cross-sectional data for describing longitudinal relationships between exposure and disease.

Cohort Studies

In cohort studies, subjects are selected on the basis of exposure status and followed over time for the development of disease. Cohort studies can be conducted prospectively or retrospectively. In a prospective cohort study, subjects are enrolled and then observed into the future, whereas in a retrospective cohort study, historical information is used to describe exposures and the occurrence of disease following entry into the cohort. The cohort design is particularly advantageous for assessing the effects of rare exposures. For studies directed at complex multicomponent mixtures, the prospective cohort approach facilitates careful exposure assessment through the opportunity to prospectively plan and implement an optimal monitoring program. Similarly, longitudinal observations of health outcomes, such as respiratory symptoms or lung function level, can be made. Thus, a prospective cohort study of brief duration represents an appropriate design for exposures and health outcomes that vary on a short-term basis. For example, Lebowitz and colleagues (10) obtained daily measurements of peak expiratory flow rate (PEFR), a measure of lung function, in subjects with asthma and assessed the relationship between daily variation of PEFR and exposures to indoor and outdoor air pollutants. The cohort design has the disadvantages of potentially high costs and of difficulty in maintaining follow-up of the study population. For health outcomes that occur infrequently, large numbers of subjects may be needed to attain adequate statistical power, particularly if the investigation is designed to assess interaction.
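Analyses like the daily PEFR study pair repeated outcome and exposure measurements within subjects. The simplest version of such an analysis, a correlation between daily lung function and a daily pollutant level, can be sketched as follows; the seven-day series is invented for illustration and is not the Lebowitz data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length daily series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical 7-day series: indoor particle level (ug/m3) and evening PEFR (L/min)
pm   = [40, 55, 35, 70, 60, 30, 50]
pefr = [410, 395, 420, 380, 390, 425, 400]
print(round(pearson_r(pm, pefr), 2))  # strong negative correlation
```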
Case-Control Studies

The case-control design involves the identification of persons ("cases") with the health outcome of interest and a control series of persons without the disease who potentially would be selected as cases if they were to develop the disease. The exposure histories of the cases and controls are ascertained and compared to estimate the risk of disease associated with exposure. Case-control studies are particularly appropriate for investigating infrequent diseases or diseases that may follow a lengthy period of exposure. Hybrid designs that "nest" case-control studies within cohort studies offer an efficient approach for characterizing exposure-disease relationships (11). The case-control design has been widely applied to investigating lung cancer and exposure to environmental tobacco smoke and to radon. Cohort designs are generally not practicable for lung cancer and these indoor pollutants. The case-control design has been used infrequently, however, for studying other indoor air pollutants and the effects of complex mixtures. The potential disadvantages include information bias, which may tend to increase or decrease associations, and selection bias, which occurs if the methods for case and control selection distort the true relation between exposure and disease.

Assessment of Interaction

Analytical Approaches

The assessment of interaction has been a subject of controversy in the epidemiological literature (3,5); the debate has been both semantic and conceptual. Nevertheless, some accord has been reached with regard to analytical methods and the interpretation of analyses directed at interaction. Interaction is assessed by selecting a measurement scale on which to compare the individual and the combined effects of the multiple risk factors; available methods exclusively address the case of two interacting risk factors. Generally, the relative risk is the measure of effect used to assess interaction.
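In a case-control comparison of exposure histories, the association is conventionally estimated with the odds ratio from a 2×2 table. A minimal sketch using the standard cross-product and a Woolf-type confidence interval; the counts are invented for illustration:

```python
import math

def odds_ratio(a, b, c, d, z=1.96):
    """Odds ratio and approximate 95% CI for a 2x2 table:
    a = exposed cases,    b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # Woolf's method
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: 40/100 cases exposed vs 25/100 controls exposed
print(odds_ratio(40, 60, 25, 75))  # OR = 2.0 with its CI
```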
On the additive scale, the combined effect is compared to the sum of the individual relative risks less unity. If the difference is zero, then interaction is not present. Positive differences represent synergism, whereas negative differences represent antagonism. On the multiplicative scale, the combined effect of the agents is compared to the product of the two relative risk estimates. Analysis methods have also been developed that flexibly fit the data on a continuum from less than additive to more than multiplicative (11,12). The presence and degree of interaction depend on the measurement scale selected. A positive and hence synergistic interaction on an additive scale may be negative and hence antagonistic on a multiplicative scale. Because of this scale dependence in assessing interaction, the additive scale has been selected as most appropriate for determining interaction of public health significance (13). In practice, interaction is generally assessed by adding product terms of the potentially interacting variables to a model that already includes individual variables for the factors. The coefficient for the product term describes the direction and magnitude of the interaction; the statistical significance of the coefficient can be tested against the null hypothesis of no interaction. Other measures of synergy have been proposed (13). With regard to addressing complex, multicomponent mixtures in indoor air, applying these methods requires estimates of exposure or dose for the pollutants of concern. Methods for modeling beyond two independent factors have not been well developed, and additional limitations (see below) must be considered.

Barriers in Assessing Interaction

Misclassification and Confounding. Estimates of personal exposures to air pollutants, both in outdoor and indoor air, are subject to some degree of misclassification, potentially both random and nonrandom in relation to health status (14,15).
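The additive and multiplicative comparisons described above can be made concrete with relative risks. A minimal sketch, assuming RR estimates for each factor alone (r10, r01) and for joint exposure (r11), all relative to the doubly unexposed group; the numbers are illustrative and chosen to show the scale dependence noted in the text:

```python
def interaction_on_scales(r10, r01, r11):
    """Compare the joint relative risk r11 with the additive
    expectation (r10 + r01 - 1) and the multiplicative
    expectation (r10 * r01)."""
    additive_excess = r11 - (r10 + r01 - 1)     # > 0: synergism on the additive scale
    multiplicative_ratio = r11 / (r10 * r01)    # > 1: synergism on the multiplicative scale
    return additive_excess, multiplicative_ratio

# Illustrative: each exposure alone doubles risk; together RR = 3.5.
# Synergistic on the additive scale, antagonistic on the multiplicative scale.
add_x, mult_r = interaction_on_scales(2.0, 2.0, 3.5)
print(add_x, mult_r)  # → 0.5 0.875
```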
Random misclassification tends to bias measures of association toward the null value (14). Both empiric and theoretical analyses indicate the potential for a strong bias toward the null (16,17). In assessing interaction, random misclassification would also tend to bias toward the null, whereas the consequences of nonrandom misclassification may be to increase or decrease effects. Confounding refers to bias introduced by association between the risk factor of interest and another risk factor for the health outcome under investigation. The presence of uncontrolled confounding could potentially have complex consequences in assessing interaction, depending both on the direction of confounding and on the pattern of interaction, synergistic or antagonistic.

Statistical Power. The statistical power of the usual methods for assessing interaction is limited (18). Power may be further compromised by misclassification of the estimates of the interacting exposures. Thus, failure to find statistically significant interaction does not exclude the presence of a significant degree of interaction, from either the biological or the public health perspective. Confidence intervals for the parameters estimating interaction describe the range of interaction compatible with the data.

Model Specification. In assessing interaction, statistical models are used to represent potentially complex biological phenomena that may be incompletely characterized. Modeling approaches are determined largely by the availability of statistical software; most models inherently assume either an additive or a multiplicative scale for describing interaction. To the extent possible, models should be developed to reflect the underlying biological process, rather than chosen on the basis of convenience in modeling and the availability of software. Flexible modeling strategies have been developed that do not require the direct specification of the model as additive or multiplicative (10,11).
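The bias toward the null from random (nondifferential) exposure misclassification, noted above, can be demonstrated with a small simulation. A sketch under assumed parameters (true risk ratio of 3, 25% of exposure labels flipped at random); all values are illustrative:

```python
import random

def simulate_risk_ratio(n=200_000, p_exposed=0.3, risk_unexposed=0.05,
                        true_rr=3.0, flip_prob=0.25, seed=1):
    """Risk ratio estimated from correct vs randomly misclassified exposure labels."""
    rng = random.Random(seed)
    counts = {("true", True): [0, 0], ("true", False): [0, 0],
              ("noisy", True): [0, 0], ("noisy", False): [0, 0]}
    for _ in range(n):
        exposed = rng.random() < p_exposed
        risk = risk_unexposed * (true_rr if exposed else 1.0)
        case = rng.random() < risk
        # nondifferential error: flip the exposure label with probability flip_prob
        noisy = exposed != (rng.random() < flip_prob)
        for kind, label in (("true", exposed), ("noisy", noisy)):
            counts[(kind, label)][0] += case
            counts[(kind, label)][1] += 1
    def rr(kind):
        c1, t1 = counts[(kind, True)]
        c0, t0 = counts[(kind, False)]
        return (c1 / t1) / (c0 / t0)
    return rr("true"), rr("noisy")

rr_true, rr_noisy = simulate_risk_ratio()
print(rr_true, rr_noisy)  # the misclassified estimate is pulled toward 1
```

With these parameters the estimate from the misclassified labels falls from roughly 3 to under 2, illustrating the attenuation discussed in the text.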
These approaches, however, also suffer from limited power in determining the pattern of interaction and should not replace a priori model specification on a biological basis.

Conclusions

A full understanding of the health effects of indoor air pollution will require the development of information on the effects of pollutant mixtures. The usual epidemiological study designs can be used for this purpose, but the choice of design strategy merits particular consideration if interactions among pollutants are the focus of investigation. To provide insight into patterns of interactions among pollutants, an epidemiological investigation needs to incorporate accurate estimates of personal exposure to the relevant pollutants and other factors. Analytical methods are available for assessing interaction among pollutants, but in applying these methods, the limits posed by the adequacy of statistical power and the biological relevance of the assumed statistical model need to be addressed.
Immune responses associated with protection induced by chemoattenuated PfSPZ vaccine in malaria-naive Europeans

Vaccination of malaria-naive volunteers with a high dose of Plasmodium falciparum sporozoites chemoattenuated by chloroquine (CQ) (PfSPZ-CVac [CQ]) has previously demonstrated full protection against controlled human malaria infection (CHMI). However, lower doses of PfSPZ-CVac [CQ] resulted in incomplete protection. This provides the opportunity to understand the immune mechanisms needed for better vaccine-induced protection by comparing individuals who were protected with those not protected. Using mass cytometry, we characterized immune cell composition and responses of malaria-naive European volunteers who received either lower doses of PfSPZ-CVac [CQ], resulting in 50% protection irrespective of the dose, or a placebo vaccination, with everyone becoming infected following CHMI. Clusters of CD4+ and γδ T cells associated with protection were identified, consistent with their known role in malaria immunity. Additionally, EMRA CD8+ T cells and CD56+CD8+ T cell clusters were associated with protection. In a cohort from a malaria-endemic area in Gabon, these CD8+ T cell clusters were also associated with parasitemia control in individuals with lifelong exposure to malaria. Upon stimulation with P. falciparum–infected erythrocytes, CD4+, γδ, and EMRA CD8+ T cells produced IFN-γ and/or TNF, indicating their ability to mediate responses that eliminate malaria parasites.
RESEARCH ARTICLE · JCI Insight 2024;9(8):e170210 · https://doi.org/10.1172/jci.insight.170210

Introduction

Despite the significant efforts and investments to complement the current control strategies with an effective vaccine (1), malaria remains a major global health problem, with 249 million cases and an estimated 608,000 deaths in 2022 (2). Although promising results have been reported with the R21/Matrix-M malaria vaccine (2,3), until recently the only malaria vaccine recommended by the WHO was RTS,S, which provides partial protection of between 18% and 28% over 4 years in young children without booster vaccination (4,5). Recently, inoculation with Plasmodium falciparum sporozoites (SPZs) chemoattenuated with chloroquine (CQ) (PfSPZ-CVac [CQ]) was shown to induce dose-dependent protection, ranging between 33% and 100%, against P. falciparum after controlled human malaria infection (CHMI) with PfSPZ (6-8). Vaccine efficacy was associated with a significant increase in polyfunctional memory CD4+ T cells in response to SPZ or infected red blood cell antigens, as well as in circulating γδ T cells identified by flow cytometry (6), while the frequency of detectable antigen-specific CD8+ T cells was low (6). These results are consistent with previous flow cytometry studies that have linked human CD4+ and γδ T cells to the control of parasitemia in protected individuals, while the CD8+ T cell response could not be robustly captured (9,10). However, studies in nonhuman primates vaccinated with attenuated SPZs have shown that, although circulating antigen-specific CD8+ T cells were present in the periphery, their frequency was much higher in the liver (11,12). The development of mass cytometry provides the opportunity to study, in much greater detail, the specific immune cells associated with vaccine immunogenicity or efficacy (13,14). In a recent study investigating the cellular immune profile and dynamic changes in immune responses to malaria parasites in preexposed Africans and malaria-naive Europeans, mass cytometry was capable of identifying novel immune signatures associated with naturally acquired immunity that controls parasitemia (15).
In the present study, we used mass cytometry (16) to characterize the immune profiles of malaria-naive Europeans vaccinated with PfSPZ-CVac [CQ] or placebo prior to CHMI, as well as the changes in immune responses after CHMI, to identify immune profiles and functions associated with protection at a much higher granularity than conventional flow cytometry. With this vaccination approach, we found that CD8+ T cell responses can be induced, are detectable in human peripheral blood, and associate with vaccine-induced protection.

Results

Study participants. Of the 12 volunteers studied, 4 received saline (placebo group) and 8 were vaccinated 3 times with PfSPZ-CVac [CQ] at 28-day intervals (6). To assess the ability of PfSPZ-CVac [CQ] to induce protective immunity, all 12 volunteers underwent CHMI by direct venous inoculation of live PfSPZ (6). All 4 volunteers in the placebo group developed parasitemia following CHMI, as indicated by thick blood smear (TBS) and PCR data (Supplemental Table 4; supplemental material available online with this article; https://doi.org/10.1172/jci.insight.170210DS1). Four of the 8 vaccinated volunteers received 3 doses of 3,200 SPZ and the rest received 12,800 SPZ. Irrespective of the vaccine dose, 4 of the 8 volunteers remained TBS and PCR negative throughout the 21-day follow-up period after CHMI and are referred to as protected (Supplemental Table 4). The remaining 4 vaccinated volunteers (2 per vaccine dose) developed parasitemia and are referred to as nonprotected (Figure 1B and Supplemental Table 4). Baseline demographic data for the volunteers are shown in Table 1. Groups were similar with regard to age, sex, and BMI.
Vaccine-induced immune cells detected prior to CHMI that associate with protection against malaria. We investigated the immune response to PfSPZ-CVac [CQ] associated with protection at the pre-CHMI time point (c-1; 8-10 weeks after the third PfSPZ-CVac [CQ] vaccination and 1 day prior to CHMI). Unsupervised clustering using hierarchical stochastic neighbor embedding (HSNE) (17,18) identified a total of 103 distinct cell clusters (Supplemental Figure 1), classified into lineages and subsets (Supplemental Table 1) based on their marker expression. Seven major immune lineages were annotated: CD4+ T cells, CD8+ T cells, γδ T cells, unconventional T cells, B cells, and innate lymphoid cells (including NK cells), as well as monocytes and DCs (Supplemental Figure 1).

The difference in cell frequency prior to CHMI among the placebo, nonprotected, and protected groups was calculated using a generalized linear mixed-effects (GLME) model (FDR ≤ 0.05), controlled for within-individual variation. Although distinct patterns were seen in the distribution of cells among the placebo, nonprotected, and protected groups, as visualized in the HSNE map (Figure 2A), there were no statistical differences in the percentage of cells at the lineage level among the groups prior to CHMI (Supplemental Figure 2A). At the subset level, only the frequencies of the terminally differentiated effector memory CD8+ T cells reexpressing CD45RA (EMRA; CD45RA+CCR7−) and of the CD56+CD8+ T cell subsets were significantly increased in the protected group compared with the nonprotected and placebo groups (Figure 2B and Supplemental Figure 2, B and C). A more detailed examination of the CD56+CD8+ T cell subset revealed that clusters 96 and 99 displayed differential expression of CD45RO (Supplemental Figure 1B). In addition, unlike many other CD56+CD8+ T cell clusters, these clusters lacked expression of CD161, CD27, or CD127 (Supplemental Figure 1B). Thus, these cell clusters accounted for the association of the
CD56+CD8+ T cell subset with protection as tested by CHMI (Figure 2, C and D). Cluster 67 of EMRA CD8+ T cells, distinguished by its lack of CD27 expression (Supplemental Figure 1B), was also significantly more abundant in the protected group compared with the nonprotected and placebo groups (Figure 2E). In addition, we found that cluster 74 of EM CD8+ T cells expressing HLA-DR and CD38 (Supplemental Figure 1B) was significantly increased in the placebo group compared with the vaccinated groups (Figure 2F). Moreover, we characterized in more depth the CD4+ T cells and γδ T cells that Mordmüller et al. (6) identified as associated with protection. We observed that the frequencies of the EM CD4+ T cell cluster expressing PD1 but negative for CD161 and CD28 (cluster 37; Supplemental Figure 1A) and of γδ T cell cluster 81, expressing CD56, CD45RO, and CD11c (Supplemental Figure 1C), were significantly higher in the protected group compared with the nonprotected and placebo groups (Figure 2, G and H).

These results show, at high resolution, which clusters of EM CD4+ T cells and memory γδ T cells, previously found by conventional flow cytometry (6), are associated with protection against malaria. Importantly, we also detected distinct CD8+ T cell populations in peripheral blood that were induced by vaccination with PfSPZ-CVac [CQ] and were associated with protection against malaria.

Dynamic changes of immune cell clusters following CHMI. Next, the changes over time following PfSPZ challenge in the 3 groups (placebo, nonprotected, and protected) were examined.
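The basic quantity behind these comparisons is the per-sample cluster frequency at each time point. A minimal sketch of computing cluster frequencies and their pre-CHMI to day-11 change from per-cell cluster labels; the toy labels are invented, with cluster IDs borrowed from the text purely for illustration:

```python
from collections import Counter

def cluster_frequencies(labels):
    """Fraction of cells in each cluster for one sample/time point."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {cluster: n / total for cluster, n in counts.items()}

def frequency_change(labels_c1, labels_d11):
    """Change in cluster frequency between pre-CHMI (c-1) and day 11 (d11)."""
    f1, f2 = cluster_frequencies(labels_c1), cluster_frequencies(labels_d11)
    clusters = set(f1) | set(f2)
    return {c: f2.get(c, 0.0) - f1.get(c, 0.0) for c in clusters}

# Toy per-cell labels for one volunteer (real data would span ~103 clusters)
c1  = [67] * 30 + [96] * 20 + [22] * 50
d11 = [67] * 10 + [96] * 10 + [22] * 80
print(frequency_change(c1, d11))  # clusters 67 and 96 drop, cluster 22 rises
```

In the study these per-sample frequencies feed a GLME model with group, time, and their interaction; the sketch covers only the frequency computation itself.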
The change in cell frequencies per group between the time points before CHMI (c-1) and 11 days after CHMI (d11) was assessed using the GLME model (FDR ≤ 0.05), including time as a fixed effect and sample ID as a random effect. Within each of the groups, we observed variations in the frequency of cell clusters over time. Notably, the placebo group exhibited more pronounced changes, potentially linked to their initial exposure to malaria parasites. In contrast, the observed changes in the nonprotected and protected groups may reflect cumulative effects stemming from the vaccinations and the associated protection (or lack thereof) (Supplemental Table 5). However, it is important to acknowledge that these results did not consider potential confounding by immune profiles prior to vaccination.

To address this, we incorporated an interaction term between the groups and the CHMI time points (c-1 and d11) in our analysis. This approach allowed us to statistically test whether changes over time differed significantly among the groups, thus identifying immune cell populations associated with protection with respect to placebo. At the lineage level, no differences in cell frequencies were seen between groups. However, within the protected group, the frequency of the EMRA CD8+ T cell subset decreased significantly over time (Figure 3A). This was further illustrated at the cluster level by the significant decrease at d11 of the frequency of EMRA CD8+ T cell cluster 67, which was elevated at the prechallenge time point in the protected group (Figure 3, B and C). Similarly, a decrease was observed in CD56+CD8+ T cell cluster 96, which again was significantly higher at prechallenge (Figure 3, B and D). This may reflect that, upon exposure, the reactive cells induced by vaccination relocate from the periphery to tissues, such as the liver, where they may play a protective role against malaria. Alternatively, these cells can become activated or die. In addition, we show that, in the
protected group, the frequency of γδ T cells expressing CD56, CD45RO, and CD11c (cluster 81), which was significantly elevated prechallenge, decreased significantly (Figure 3, B, E, and F). In contrast, the frequency of CD161− EM CD4+ T cells expressing PD1 (cluster 37), which was also higher at prechallenge, increased significantly. A new cluster emerging from this analysis, the CD161+ EM CD4+ T cells (cluster 22) (Supplemental Figure 1A), which has been shown to be associated with protection in individuals with naturally acquired immunity in endemic areas (15), increased significantly in the protected group (Figure 3G). Only one cell cluster differentially changed in the placebo group after CHMI (Figure 3B): the frequency of EM CD8+ T cells expressing HLA-DR and CD38 (cluster 74), which was high at the prechallenge time point in the placebo group, decreased significantly at d11 (Figure 3H). The cell abundance of cluster 74 at d11 in the placebo group is similar to that observed at c-1 in the protected and the nonprotected groups (Figure 3H), which might suggest that these cells respond specifically to encountering the PfSPZ inoculum.

Overall, these results suggest that upon challenge the CD56+ and EMRA CD8+ T cell clusters, which we were able to detect using mass cytometry, leave the peripheral blood, as do γδ T cells, possibly to mediate protection against malaria parasites in the liver. At the same time, CD4+ T cells increase over time in the protected group, confirming earlier reports of their role in immunity.

Comparison of CD8+ T cells in naturally acquired and vaccine-induced immunity. Given the current detection and characterization of CD8+ T cell responses in vaccine-induced immunity to malaria, we next asked whether we could find similar CD8+ T cell clusters in naturally acquired immunity in people with lifelong exposure to malaria.
To this end, we used a data set generated in a CHMI study in Gabon (LaCHMI-001 trial; ClinicalTrials.gov NCT02237586) (19), which identified individuals with naturally acquired immunity that controlled parasitemia, using a mass cytometry panel with an identical set of markers to those used in our panel, and applied our analysis pipeline (15). In this study, a total of 20 individuals from Gabon with prior malaria exposure and 5 malaria-naive Europeans underwent a CHMI (15). All malaria-naive individuals subsequently tested positive for malaria infection. Within the group with prior malaria exposure, 12 of 20 participants developed parasitemia as determined by TBS, while 8 participants did not develop parasitemia up to 28 days after the CHMI, when they received presumptive treatment. The frequencies of cell clusters prior to CHMI (c-1) and following PfSPZ challenge (at d11) were assessed using generalized linear mixed models (P < 0.05), comparing malaria-naive Europeans, preexposed susceptible Africans (Africans who developed parasitemia within 28 days after the CHMI), and preexposed resistant Africans (Africans who did not develop parasitemia up to 28 days after the CHMI). Using unsupervised clustering with HSNE in Cytosplore, CD56+, EMRA, EM, CM, and naive subsets were identified within the CD8+ T cell compartment (Figure 4A). The frequency of these subsets did not differ significantly among the groups. However, at the cluster level, 7 clusters of CD8+ T cells were significantly different among the groups prior to CHMI (Figure 4B).
Interestingly, a cluster of EMRA CD8+ T cells (cluster 16) was strongly associated with resistance as assessed after CHMI (Figure 4B) and was found to decrease significantly over time in preexposed Africans (Figure 4C). This is a cluster of EMRA cells phenotypically similar to cluster 67 of EMRA CD8+ T cells (Supplemental Figure 1B), which, as described above in our vaccine trial, was high prior to CHMI and significantly decreased at d11 in PfSPZ-CVac [CQ]-protected vaccinees (Figure 3C). Similarly, cluster 6 of CD56+CD8+ T cells (Figure 4B) was found to be high in the resistant group at c-1 (Figure 4D), which is in line with the finding of our trial that cluster 96 of CD56+CD8+ T cells (Supplemental Figure 1B) was high prior to CHMI in the vaccinated group that was protected, compared with the other groups (Figure 2C). However, no significant decrease was seen at d11 for cluster 6 (Supplemental Figure 2D). Taken together, these data indicate that similar clusters of CD8+ T cells are associated with both naturally acquired and PfSPZ-CVac [CQ]-induced immunity.

Functional responses associated with control of parasitemia. We next assessed functional responses by measuring intracellular expression of cytokines in response to P. falciparum-infected red blood cells (PfRBCs) in CD4+, CD8+, and γδ T cells, which were found to be associated with PfSPZ-CVac [CQ]-induced protection. Specifically, while robust antigen-specific IFN-γ and TNF responses were measured following PfRBC stimulation (Supplemental Figure 3A), compared with uninfected red blood cells (uRBCs) as control (Supplemental Figure 3B), other cytokines (IL-2, IL-4, IL-5, IL-13, IL-17, and IL-10) were produced in negligible amounts.
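As detailed in the Methods, antigen-specific cytokine frequencies are reported after subtracting the matched uRBC (mock) condition, as a percentage of the parent cluster. A minimal sketch of that background subtraction (the values are hypothetical, and clipping negative differences at zero is our assumption; the text states only that uRBC values are subtracted):

```python
def antigen_specific_pct(pfrbc_pct, urbc_pct):
    """Background-subtracted frequency of cytokine-producing cells
    (% of parent cluster): PfRBC-stimulated minus uRBC control,
    clipped at zero (assumption)."""
    return max(pfrbc_pct - urbc_pct, 0.0)

# Hypothetical frequencies for one cluster in one donor
ifng = antigen_specific_pct(4.2, 0.6)  # IFN-γ+: PfRBC 4.2%, uRBC 0.6%
tnf = antigen_specific_pct(0.3, 0.5)   # background exceeds signal -> 0
```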
Prior to CHMI, we observed, within the CD8+ T cell compartment, a significant increase in IFN-γ-producing (but not TNF-producing) CD56+CD8+ T cells (cluster 73) as well as EMRA CD8+ T cells (cluster 60) in the protected group (Figure 5, A-C). This indicates that EMRA cells induced by PfSPZ-CVac [CQ] are not hyporeactive and are able to respond to PfRBCs. Furthermore, EM CD4+ T cells also responded to PfRBC stimulation. Indeed, a cluster of EM CD4+ T cells expressing CD161, CD28, and PD1 (cluster 33) producing IFN-γ was found at higher frequency in the protected group prior to CHMI (Figure 5, A and D). Similarly, an increased frequency of IFN-γ- and TNF-producing CD56+ γδ T cells (cluster 8) expressing CD161 and CD45RO (Figure 5, A and E) was seen following PfRBC stimulation. However, a higher frequency of cytokine-producing γδ T cells was found in the placebo group compared with the nonprotected and protected groups.

A number of changes in cytokine-producing cell frequencies between c-1 and d11 (Figure 5F) that were not statistically significant following FDR correction might be worth considering. For example, we could see that the proinflammatory cytokine-producing CD56+ (cluster 73) and EMRA CD8+ T cells (cluster 60) decreased after PfSPZ challenge (Figure 5F and Supplemental Figure 3, C and D), in line with the decrease over time in the protected group of CD56+ and EMRA CD8+ T cell frequencies, as seen in Figure 3, C and D, suggesting the relocation of such cells to the liver. Interestingly, the frequency of CD56+ γδ T cells (cluster 8) producing proinflammatory cytokines was also seen to decrease at d11 (Figure 5F and Supplemental Figure 3E). This decrease could potentially be attributed to the relocation of these cells to the liver, resulting in a reduced frequency of cytokine-producing cells. This observation aligns with the declining frequency of these cells
from c-1 to d11. This also agrees with previous observations in endemic regions, where repeated malaria exposure has been shown to result in a decreased frequency of proinflammatory cytokine-producing γδ T cells upon antigen reexposure.

Discussion
Using high-dimensional immune profiling, we could detect clusters of CD56+ and EMRA CD8+ T cells associated with PfSPZ-CVac [CQ]-induced immunity to malaria. In addition, we identified clusters that might be responsible for the reported association of CD4+ and γδ T cells with vaccine-induced immunity to malaria (6). Specifically, with regard to CD4+ T cells, we found total CD161+ EM CD4+ T cells and IFN-γ-producing CD161+ CD4+ T cells to increase over time in the protected vaccinees. This is consistent with previous data showing their association with naturally acquired immunity, induced after repeated exposure to malaria parasites in endemic areas (15). In our study, vaccinees received 3 inoculations of live chemoattenuated PfSPZ prior to CHMI. This regimen resulted in repeated exposure to early blood-stage malaria parasites that starts to resemble what is seen in natural infections, which might explain the expansion of IFN-γ-producing CD161+ EM CD4+ T cells that contribute to vaccine-induced immunity as well as to naturally acquired immunity. Considering γδ T cells, we found that the population associated with protection expressed CD56. NK-like γδ T cells have been described to be associated with clinical immunity against malaria as a result of multiple exposures (20, 21), and it was reported that this also leads to a decrease in their proinflammatory cytokine production (21). Therefore, in our study, the repeated exposure inherent to PfSPZ-CVac [CQ] could explain (a) the higher induction of CD56+ γδ T cells associated with protection and (b) the decrease in the placebo group in the frequency of IFN-γ- or TNF-producing CD56+ γδ T cells in response to malaria antigens in vitro on d11 after CHMI.
The protective CD8+ T cell responses against malaria, induced by both irradiated SPZs and various subunit vaccines (22-24), have been found to be correlated with liver-stage immunity (25, 26). Here, we could identify specific cell clusters and report that CD56+CD8+ T cells and EMRA CD8+ T cells are associated with protection against malaria in both malaria-naive Europeans immunized with PfSPZ-CVac [CQ] and Africans with lifelong exposure to malaria. Interestingly, in both populations, the frequency of CD56+ and EMRA CD8+ T cells was high prior to CHMI and decreased over time in the vaccine-induced or naturally acquired protected group exposed to CHMI, suggesting the migration of cells to the liver, where CD8+ T cells can directly act against liver-stage parasites (9, 11, 27). In this regard, an important study has shown that CD8+ tissue-resident T cells obtained by fine-needle biopsy of the liver have counterparts in peripheral blood, showing transcriptional similarity (25), which might warrant linking the activity of some circulating CD8+ T cells to their activity in the liver.
The CD56+CD8+ T cells that we found associated with protection might represent NKT cells, although specific NKT cell markers are required for their definitive identification (28). NKT cells are thought to prevent the development of blood-stage malaria by inhibiting the proliferation of parasites in hepatocytes (29). There is currently limited evidence suggesting that EMRA CD8+ T cells play a role in protection against malaria. However, they are induced by the live yellow fever vaccine (YF-17D) (30, 31) and are possibly involved in protection induced by the tetravalent live attenuated dengue vaccine (TV003) (32). EMRA cells are suggested to derive from antigen-specific cells reexpressing CD45RA as an indicator of highly functional memory CD8+ T cells (30, 33), yet the loss of CD27 on EMRA cells has been associated with low expansion potential in response to antigen stimulation in vitro (33, 34). Here, for malaria, we report that the frequency of total CD27− EMRA CD8+ T cells and the frequency of IFN-γ+CD27− EMRA CD8+ T cells was high at c-1 in the protected group, in contrast to their EM counterparts. These observations suggest that EMRA cells might be a terminally differentiated but fully functional effector T cell population in the protected group. The significant decrease in EMRA frequency observed at d11 in the protected group might indicate that these cells respond specifically to PfSPZ infection after reexposure. This observation is in line with a previous report indicating that EMRA CD8+ T cells retain epigenetic markers that promote rapid effector function (35). The remaining question is how these cells are formed and where they relocate after the challenge. Analysis of the immune response before any vaccination and analysis of the TCR clonotype may help develop understanding of the specific kinetics of these cells in response to malaria infection. In addition, despite the obvious drawbacks, the inclusion of hepatic fine-needle aspirates (25) to investigate
the relationship between circulating EMRA CD8+ T cells and their liver-resident counterparts may provide further insight into the protective role of EMRA CD8+ T cells in human malaria infection.

Furthermore, we found the EM CD8+ T cell cluster, positive for HLA-DR and CD38, to be significantly depleted in the vaccinated group compared with the placebo group. Interestingly, this contrasts with results of a subunit vaccine, AdCA (an adenovirus-vectored malaria vaccine expressing P. falciparum circumsporozoite protein [CSP] and apical membrane antigen-1 [AMA1]), which did not induce sterile protection (24). The AdCA vaccination induced an increase in antigen-specific CD8+ T cells expressing CD38 and HLA-DR at day 22 after immunization compared with baseline (before immunization). In our study, the post-immunization time point is 8-10 weeks after the last inoculation, which might then allow these cells to migrate into the lymph nodes or tissues of the PfSPZ-CVac [CQ]-immunized volunteers.

It is important to acknowledge the limitations inherent to controlled human infection studies, notably the small sample size, which can affect the statistical analysis and generalizability of findings. Further studies in a larger number of individuals vaccinated with chemoattenuated malaria parasites are needed to confirm these findings and to unequivocally establish that the cell clusters identified here represent correlates of immunity or might mediate protection against malaria parasites. Another limitation of this study is that we tested whole parasite antigen extract and do not have data on the specific antigens that drive the responses; again, future studies could contribute to this knowledge gap. Moreover, it would be interesting to assess whether the changes in overall cellular response to malaria vaccine and challenge through CHMI differ from responses to other live attenuated vaccines, to establish whether there are some general rules on how certain types of vaccine
responses can be optimized. The absence of baseline data is also a limitation. However, the analysis of baseline data in previously published work on the same cohort showed no differences among the groups (6). Nevertheless, the inclusion of baseline samples (before vaccination) could have strengthened our conclusions regarding the effect of vaccination with PfSPZ-CVac on the development of specific immune responses associated with protection. Notwithstanding these limitations, this work highlights how combining high-dimensional single-cell analysis with vaccination and controlled human infection has the potential to identify cell populations involved in protection against malaria.

Methods
Sex as a biological variable. Both male and female individuals were included in the study. The sex ratio is provided in Table 1. The potential influence of sex on the observed outcomes was not accounted for in our analysis.

Study population and sampling. Samples for this study were collected as part of the TüCHMI-002 trial, a randomized, placebo-controlled, double-blind study evaluating the inoculation of aseptic, purified, cryopreserved, nonirradiated PfSPZ by direct venous injection to malaria-naive, healthy adult volunteers from Germany, who were taking CQ as chemoprophylaxis against malaria (Sanaria PfSPZ-CVac [CQ]). Healthy malaria-naive volunteers, aged 18-45 years, were divided into 2 main groups: the control or placebo group and the experimental group. Within the experimental group, volunteers were divided according to the concentration of P.
falciparum SPZs (PfSPZ) they received during the immunization phase: a low dose (3,200 SPZ) or a medium dose (12,800 SPZ). Volunteers received 3 PfSPZ immunizations or saline buffer at 28-day intervals, in combination with a weekly dose of 5 mg/kg or 310 mg CQ up to 5 days after the last immunization, after an initial dose of 10 mg/kg or 620 mg CQ 2 days before the first immunization. Eight to 10 weeks after the last immunization, all volunteers (including those in the placebo group) underwent a CHMI trial. Parasitemia following the challenge was evaluated daily, using both qPCR and TBS, starting from day 6 after challenge up to day 21. The parasitemia outcomes for each donor can be found in Supplemental Table 4. A TBS slide was considered positive when two separate readers detected at least 2 parasites in 300 reading fields. In cases where discrepancies arose in parasitemia results between the two readers, a third reader was asked to assess the slide. If only 1 parasite-like structure was detected by the readers, more reading fields were assessed. The sample was declared negative in cases in which no other parasite could be found. Correlation between qPCR and TBS results was seen, as presented in Supplemental Table 4. Notably, within the protected group, all participants tested negative for P. falciparum infection by both qPCR and TBS. Samples used in this study were collected on c-1 and d11 for PBMC isolation by density-gradient centrifugation and cryopreserved for subsequent immunophenotyping assays (Figure 1A).

To compare vaccine-induced immunity with naturally acquired immunity, we used publicly available data from a previous study conducted in Gabon (ImmPort accession SDY1734) (15, 19). This allowed us to analyze and compare specifically the CD8+ T cell immune responses in these 2 groups.
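The slide-reading rule above can be expressed as a small decision function. This is a simplified sketch: the "read more fields" step for a single parasite-like structure is not modeled, and the reader counts in the usage examples are hypothetical.

```python
def tbs_positive(reader1_count, reader2_count, reader3_count=None):
    """Thick blood smear positivity: a slide is positive when two separate
    readers each detect at least 2 parasites in 300 reading fields; a third
    reader adjudicates discrepant reads."""
    THRESHOLD = 2  # parasites per 300 reading fields
    r1 = reader1_count >= THRESHOLD
    r2 = reader2_count >= THRESHOLD
    if r1 == r2:
        # Concordant reads: both positive or both negative
        return r1
    if reader3_count is None:
        raise ValueError("discrepant reads require a third reader")
    return reader3_count >= THRESHOLD

# Concordant positive, concordant negative, and adjudicated examples
assert tbs_positive(3, 2) is True
assert tbs_positive(0, 1) is False
assert tbs_positive(2, 0, reader3_count=4) is True
```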
Cell processing. Cryopreserved PBMCs were thawed in RPMI medium supplemented with penicillin/streptomycin (RPMIpen/strep) and complemented at 50% with heat-inactivated (hi) FCS. After thawing, 1-2 million cells were aliquoted into a 5 mL Eppendorf tube for direct staining with the panel in Supplemental Table 2. The remaining cells were rested for 4 hours at 37°C in culture medium (RPMIpen/strep + 10% hiFCS) at a concentration of 1 million cells/mL. All samples met the acceptance criterion of more than 80% viability after thawing and resting. Next, to assess the response to malaria antigens, PfRBCs were used as a source of antigen. The PfSPZ-CVac immunization procedure leads to exposure of the immune system not only to SPZs, but also to infected hepatocytes and even early blood-stage parasites: PfSPZ-CVac immunization includes the inoculation of SPZs that invade liver cells subsequent to injection, where they differentiate and multiply for about 1 week, followed by release into the bloodstream and infection of erythrocytes in the form of merozoites. Therefore, 1-3 million of the rested cells per sample were stimulated for 24 hours with PfRBCs or uRBCs at a ratio of 1:1 at 37°C and 5% CO2. Brefeldin A was added to the cells 4 hours before the end of the culture. PfRBCs and mock-cultured uRBCs were obtained by purifying NF54 asexual-stage cultures using a MACS column (Miltenyi Biotech, 130-042-401) and cryopreserved in 15% glycerol/PBS. After stimulation, cells were transferred to a 5 mL Eppendorf tube for staining with the panel in Supplemental Table 3.
Staining procedures. Prior to staining, cells were incubated with 1 mL 500 μM Cell-ID Intercalator-103Rh (Fluidigm, catalog 201103A; ×500 dilution) for 15 minutes at room temperature to identify dead cells. Subsequently, cells were washed with 2 mL staining buffer (DVS Sciences, catalog 201068). Cells were next incubated for 10 minutes with 50 μL human TruStain FcX Fc-receptor blocking solution (BioLegend; ×10 dilution) at room temperature. Then, 50 μL of the surface antibody cocktail was added to the cells (Supplemental Tables 1 and 2) for 45 minutes at room temperature. After staining, cells were fixed with 1 mL of ×1 MaxPar Fix I buffer (Fluidigm, catalog 201065) for 20 minutes at room temperature. Stimulated cells were then stained a second time with 50 μL of an intracellular antibody cocktail (Supplemental Table 2), freshly prepared in permeabilization buffer (Fluidigm, catalog 201066), for 30 minutes at room temperature. After staining, cells were incubated overnight at 4°C with 1 mL 125 μM Cell-ID Intercalator-Ir (Fluidigm, catalog 201192A; ×1,000 dilution) in MaxPar Fix and Perm buffer (Fluidigm, catalog 201067).
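The staining steps above work with fixed fold dilutions (×500, ×10, ×1,000). A trivial helper for the corresponding stock-volume arithmetic (the volumes in the example are hypothetical):

```python
def stock_volume(final_volume_ul, fold_dilution):
    """Volume of stock reagent (in the same units as final_volume_ul)
    needed to achieve a given fold dilution."""
    return final_volume_ul / fold_dilution

# e.g., a x500 dilution in a 1 mL (1,000 uL) stain needs 2 uL of stock
vol = stock_volume(1000, 500)
```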
Mass cytometry (CyTOF) measurements. Cells were acquired with the Helios mass cytometer (Fluidigm) at a concentration of 1 × 10⁶ cells/mL in Milli-Q water (Ultrapure water systems, PureLAB Ultra) complemented with 10% EQ Four Element Calibration Beads (Fluidigm, catalog 201078). In addition to the antibody panel detection channels (including intercalator channels), calibration bead (140Ce, 151Eu, 153Eu, 165Ho, and 175Lu) and contamination (133Cd, 138Ba, and 208Pb) channels were activated. After cell acquisition, the calibration bead data were used to normalize signal fluctuations. The normalized FCS files were exported and analyzed with FlowJo V10 (TreeStar) to exclude EQ beads and select live CD45+ cells (Supplemental Figure 4A). The FCS files from selected live CD45+ cells were then analyzed using the hierarchical stochastic neighbor embedding (HSNE) method in Cytosplore (17, 18), a dimensionality reduction visualization tool (Supplemental Figure 4B). The HSNE method of clustering enables the selection of cell landmarks (also referred to as clusters) per level based on the similarity of their marker expression. At the first level of the HSNE (overview level), major cell lineages were defined (i.e., CD8+ T cells). Next, we zoomed into each lineage separately to a more detailed level, where we defined cell subsets (i.e., CD56+CD8+ T cells). The HSNE density plot allows visualization of the distribution of the cells, indicating their similarity in the HSNE embedding and suggesting distinct immune signatures in a given immune compartment. The same process was repeated until the cluster level was reached (Supplemental Figure 4B). Each cluster represents a unique phenotype (Supplemental Figure 1). For each cluster, the percentage of IFN-γ- or TNF-producing cells was filtered using a manually determined cytokine threshold, and frequencies of cytokine-producing cells were calculated per cluster. All antigen-specific cytokine frequencies to PfRBCs are reported
after uRBC subtraction, as a percentage of parents.

Statistics. All statistical analyses were performed using R software version 4.0.2 (36). The GLME model was used to assess the difference in cell clusters between the study groups and the changes in cell frequency after PfSPZ inoculation (37). Similarly, the GLME model was used to assess the difference at c-1 and the change after CHMI in the frequency of cytokine-producing cells per group. Specifically, to assess the change in cell frequencies over time, an interaction term between the groups variable and the time points variable was included. The model was computed using the glmer function (lme4 package) in R. P values were adjusted for multiple comparisons using the FDR method. The significance level was set at P ≤ 0.05 after FDR correction. For the phenotyping assay, the frequency of each cell cluster as a percentage of total cells (CD45+ cells) was determined. For cytokine production after stimulation, the frequency of cytokine-producing cells was determined as a percentage of their respective cluster. Using the R package survminer, we generated survival curves.

Study approval. The TüCHMI-002 trial was approved by the ethics committee of the Medical Faculty and the University Clinics of the University of Tübingen and registered at ClinicalTrials.gov (NCT02115516) and in the EudraCT database (2013-003900-38). The trial followed the principles of the Declaration of Helsinki, Good Clinical Practice, and Good Clinical and Laboratory Practice. The study was carried out under FDA IND 15862 and with the approval of the Paul-Ehrlich-Institute (Langen, Germany).
Figure 1. TüCHMI trial and outcome. (A) Healthy volunteers included in the trial were split into 2 main groups: the experimental group (in brown), consisting of volunteers (n = 8) receiving 3 doses of PfSPZ vaccine at 28-day intervals (V0, V28, and V56) in combination with a weekly dose of chloroquine up to 5 days after the last inoculation (V61) (PfSPZ-CVac [CQ]); and the placebo group (in blue), consisting of volunteers (n = 4) inoculated with saline buffer. Eight to 10 weeks after the last inoculation, all volunteers in both the experimental and placebo groups underwent a CHMI trial. Immune responses to PfSPZ-CVac [CQ] inoculation were assessed at c-1 (1 day before the challenge [c0]) and d11 (11 days after the challenge). (B) Proportion of protected volunteers. Kaplan-Meier survival curves for days to parasitemia determined by thick blood smear for the PfSPZ-CVac [CQ]-vaccinated (brown) and placebo (blue) groups. Volunteers in the placebo group all became malaria positive by day 18 after CHMI, while in the vaccinated group, some volunteers (4 of 8 vaccinated in total) remained malaria negative up to 21 days after CHMI.

Figure 2.
Vaccine-induced immunity associated with protection prior to CHMI. (A) Hierarchical stochastic neighbor embedding density maps showing differences in major cell lineages among volunteers in the placebo (n = 4), nonprotected (n = 4), and protected (n = 4) groups. The cell density per individual map is indicated by color. (B) Heatmap summary of Z scores of the normalized cell count per cell subset per group, where colors represent the mean Z score as indicated. (C) Box plots showing the frequency of CD56+CD8+ T cell cluster 96, (D) CD56+CD8+ T cell cluster 99, (E) EMRA CD8+ T cell cluster 67, (F) HLA-DR+CD38+ EM CD8+ T cells, (G) CD4+ T cell cluster 37, and (H) CD56+ γδ T cell cluster 81, relative to CD45+ cells, comparing the placebo (n = 4, blue), nonprotected (n = 4, gray), and protected (n = 4, orange) groups. The box plots represent the median and first and third quartiles, and the whiskers represent the maximum/minimum, no further than 1.5 times the interquartile range (IQR). *P ≤ 0.05, **P < 0.01, ***P < 0.001, computed using the GLME model after FDR correction.

Figure 3.
Dynamic changes of immune cell clusters following CHMI. (A) Heatmap summary of log2 fold change (FC) of cell subsets from c-1 to d11. (B) Circos heatmap showing the log2 FC from c-1 to d11 for each cluster per group. Clusters that change significantly over time are in red. (C) The frequency of EMRA CD8+ T cell cluster 67, (D) CD56+CD8+ T cell cluster 96, (E) CD56+ γδ T cell cluster 81, (F) CD161− EM CD4+ T cell cluster 37, (G) CD161+ EM CD4+ T cell cluster 22, and (H) HLA-DR+CD38+ EM CD8+ T cells (cluster 74), from c-1 to d11 for the placebo (n = 4, blue), nonprotected (n = 4, gray), and protected (n = 4, orange) groups. Data in C-H are presented as box plots representing the median and first and third quartiles, while the whiskers indicate the overall data range, no further than 1.5 times the interquartile range (IQR). The interaction between the groups and the time points was computed in a generalized linear mixed model (GLMM, binomial family) to assess the dynamic change over time. *P ≤ 0.05, **P < 0.01, ***P < 0.001 after FDR correction. The abundance of the indicated clusters is given as a percentage of CD45+ cells.

Figure 4.
Characterization of CD8+ T cells in naturally acquired immunity. (A) HSNE plots showing cell subsets within the CD8+ T cell lineage (left), annotated based on the indicated markers (right). Colors represent the arc-hyperbolic sine 5-transformed (arsinh-5-transformed) marker expression as indicated. (B) Heatmap of CD8+ T cell clusters, showing expression of markers as median signal intensity after arsinh transformation. Each cluster has a unique cluster number, and the subset to which each cluster belongs is shown at the top. A generalized linear mixed model (GLMM, binomial family) was used to compare cluster abundance among malaria-naive Europeans (n = 5, blue), lifelong-exposed susceptible Africans (n = 12, green), and resistant Africans (n = 8, pink). Colored stars below the clusters indicate statistical significance in naive Europeans and lifelong-exposed resistant Africans. (C and D) Box plots representing the median and first and third quartiles of the frequency of (C) EMRA CD8+ T cells (cluster 16) and (D) CD56+CD8+ T cells (cluster 6), both relative to CD45+ cells. The whiskers of the box plots indicate a range no further than 1.5 times the interquartile range (IQR). *P ≤ 0.05, **P < 0.01, ***P < 0.001. P values were computed using a generalized linear mixed model (GLMM, binomial family).

Figure 5.
Cytokine response to PfRBC stimulation. (A) The heatmap on the left shows the expression of markers as median signal intensity after arsinh transformation for CD8+, CD4+, and γδ T cells. Each cluster has a unique number. The heatmap on the right shows the summary of Z scores of the normalized frequency of cells producing IFN-γ and TNF per cluster (after subtraction of uRBCs) in the placebo (n = 4), nonprotected (n = 4), and protected (n = 4) groups at baseline. The colors represent the mean Z score per cluster per group. (B) Frequency of IFN-γ- and TNF-producing CD56+CD8+ T cells (cluster 73), (C) EMRA CD8+ T cells (cluster 60), (D) CD161+ EM CD4+ T cells, and (E) CD56+ γδ T cells, given as a percentage of parents among the indicated groups. The data are presented as box plots showing the median and first and third quartiles, and whiskers extend to the maximum/minimum, no further than 1.5 times the interquartile range (IQR). *P ≤ 0.05, computed using a generalized linear mixed model (GLMM, binomial family). (F) Circos heatmap showing the log2 FC of IFN-γ- (dashed line) and TNF-producing (full line) cells from c-1 to d11.
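The legends above rely on three simple summaries: arsinh-5-transformed marker intensities, Z scores of normalized frequencies, and log2 fold changes from c-1 to d11. Minimal sketches of each, with hypothetical input values:

```python
import math

def arsinh5(x):
    """Arc-hyperbolic sine transform with cofactor 5, commonly applied to
    CyTOF signal intensities before visualization."""
    return math.asinh(x / 5.0)

def z_scores(values):
    """Z score of each value against the mean and (population) SD of the list,
    as used to summarize normalized cell counts per group in the heatmaps."""
    mean = sum(values) / len(values)
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    return [(v - mean) / sd for v in values]

def log2_fc(freq_d11, freq_c1):
    """log2 fold change of a cluster frequency from c-1 to d11."""
    return math.log2(freq_d11 / freq_c1)
```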
v3-fos-license
2023-12-30T02:05:20.484Z
2023-12-29T00:00:00.000
266596250
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1371/journal.pgph.0002829", "pdf_hash": "7870ce14b4400db157ab1225bd78dc361bdf769b", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1315", "s2fieldsofstudy": [ "Medicine" ], "sha1": "6ed09cd7b14204676034c1113871ee2599c82830", "year": 2024 }
pes2o/s2orc
Changes in the medical admissions and mortality amongst children in four South African hospitals following the COVID-19 pandemic: A five-year review

Vulnerable children from poor communities with high HIV and Tuberculosis (TB) burdens were impacted by COVID-19 lockdowns. Concern was raised about the extent of this impact and anticipated post-pandemic surges in mortality. Interrupted time series segmented regression analyses were done using routinely collected facility-level data of children admitted for medical conditions at four South African referral hospitals. Monthly admission and mortality data over 60 months from 01 April 2018 to 31 January 2023 were analysed using models which included dummy lockdown level variables, a dummy post-COVID period variable, Fourier terms to account for seasonality, and excess mortality as a proxy for healthcare burden. Of the 45 015 admissions analysed, 1237 (2·75%) demised, with significant decreases in admissions during all the lockdown levels; the most significant mean monthly decrease of 450 (95% CI = −657·3 to −244·3), p<0·001, occurred in level 5 (the most severe) lockdown. There was evidence of loss of seasonality on a six-month scale during the COVID periods for all admissions (p = 0·002), including under-one-year-olds (p = 0·034) and under-five-year-olds (p = 0·004). No decreases in mortality accompanied decreased admissions. Post-pandemic surges in admissions or mortality were not identified in children with acute gastroenteritis, acute pneumonia and severe acute malnutrition. During the COVID-19 pandemic, paediatric admissions in 4 hospitals serving communities with high levels of HIV, TB and poverty decreased, similar to global experiences; however, there was no change in in-hospital mortality. No post-pandemic surge in admissions or mortality was documented.
Differences in the impact of pandemic control measures on the transmission of childhood infections and access to health care may account for the differing outcomes seen in our setting compared to the global experiences. Further studies are needed to understand the impact of pandemic control measures on healthcare provision and transmission dynamics and to better inform future responses amongst vulnerable child populations.

Background

The national lockdown regulations promulgated across the globe due to the COVID-19 pandemic disrupted essential healthcare services [1]. Emergency outpatient visits and admissions decreased sharply among children in all countries, especially between February 2020 and December 2021 [2–5]. Decreases of 19%, 50% and 56% in paediatric admissions were documented in Cameroon, South Africa (SA) and across Europe, respectively, compared with pre-COVID-19 time periods [3–5]. Vulnerable populations, including children who have sub-optimal access to healthcare and who live in poverty, have higher rates of malnutrition and are seen in larger numbers in lower- and middle-income countries (LMICs). These sub-populations were especially negatively affected by the lockdowns [2–4].

The decrease in paediatric admissions has been greater in children with communicable (77%) compared with non-communicable diseases (37%) [5]. Admissions of children with lower respiratory tract infections (LRTI), including viral bronchiolitis, also decreased [6–8]. Changes in seasonal patterns of viral bronchiolitis compared with patterns identified in previous pre-COVID-19 years were noted [6]. This was postulated to occur due to reduced person-to-person transmission, and it raised concerns that a rebound would occur when transmission-mitigating strategies were curtailed [6].
Visits to children's routine immunisation services decreased significantly across multiple countries after the start of the COVID-19 pandemic. The promulgation of national lockdown measures restricting movement and cancelling public transport, at varying levels of severity, occurred on 23 March 2020 [4,9]. These decreases were documented in both urban and rural primary healthcare facilities [10]. Outpatient visits for children with Human Immunodeficiency Virus (HIV) dropped by 41%, and antiretroviral treatment initiation of newly diagnosed children also decreased in 2020 and 2021 [11]. These changes in access and utilisation of preventative healthcare and HIV chronic care raised concerns about negative health consequences, especially where HIV and Tuberculosis (TB) are common and many live in poverty in high-density communities [11]. HIV viral suppression rates, however, were shown to be maintained among children, suggesting some chronic disease programmes remained reasonably robust [12].
Overall, the COVID-19 pandemic disrupted healthcare provision and health-seeking behaviour and was postulated to disproportionately impact specific subpopulations in low-income countries with fragile health systems and pervasive social-structural vulnerabilities [13]. Documentation of these indirect effects of the COVID-19 pandemic has been largely restricted to the period during the peaks of the COVID-19 lockdowns between February 2020 and December 2021 and is not adequately documented in communities with high burdens of HIV, TB and malnutrition [11]. The impact of varying severities of national lockdowns on admissions and mortality is not known in vulnerable communities that rely on public transport. It is also not known whether a reduction in infectious diseases and a concomitant decrease in mortality due to an overall reduction of disease burden would occur in such communities. Concern was also raised about mortality and morbidity rates rising, specifically in these vulnerable children, after the removal of lockdown measures [14]. Children hospitalised in specialist referral hospitals generally require higher levels of medical care and represent the more severe cases [15]. This study describes and analyses changes in admission and in-hospital mortality amongst children in South African specialist referral hospitals during the varying national lockdown levels associated with the COVID-19 pandemic and the post-pandemic period and compares these with the pre-pandemic period.
Study design and population

We conducted an interrupted time series analysis of routinely collected facility-level data of children below the age of 13 years hospitalised across all four of the largest public sector (non-fee-paying) specialist referral hospitals in the city of Durban (eThekwini District), KwaZulu-Natal (KZN). The data included those hospitalised with medical diagnoses only, thus allowing the analysis to reflect the impact of the COVID-19 pandemic specifically on communicable diseases. In-born neonates and children hospitalised for surgical (general surgery, trauma, ear, nose and throat procedures, orthopaedic reasons) or other non-medical reasons (psychiatric and social admissions for respite care or neglect) were purposefully excluded from the analysis. We used data from the King Edward VIII, Mahatma Gandhi Memorial, Prince Mshiyeni Memorial and R K Khan Memorial hospitals, which provide 240 in-patient paediatric medical specialist care beds (including designated high care and beds for interim invasive ventilation) for approximately 1·1 million children [15,16]. The children admitted to these hospitals are referred by primary healthcare providers (nurse-run day clinics, family practitioners, non-specialist district hospitals) and are generally complex cases requiring higher care levels. Children who require longer-term invasive ventilation (>72 hours) are referred to paediatric intensive care units located at the quaternary hospital. The majority of the children who attend and are hospitalised in these four referral hospitals are from lower socio-economic communities and live in communities with high population densities [15]. A documented decline of 37% in routine immunisation coverage, with a rapid recovery, was seen in the eThekwini district between April and June 2020 [4]. The HIV antenatal seroprevalence of the population served by these hospitals is high at 44·3% (CI: 41·6–46·7), reflecting a high burden of both HIV-exposed infants and HIV-infected children
[16,17]. The data for the period from 01 April 2018 to 31 January 2023 were retrospectively accessed from 07 February 2023 to 21 February 2023. The study period spanned 60 months and included 23 months in the pre-COVID-19 period (01 April 2018 to 28 February 2020), 23 months of the designated COVID-19 period (01 March 2020 to 31 January 2022), during which one of the five lockdown stages was promulgated, and 14 months of the post-COVID-19 period (01 February 2022 to 31 January 2023) when no lockdowns were in place [18,19]. Monthly data in the COVID period were thus stratified according to the predominant lockdown level in each of the 23 months in this period.

Data collection

The admission diagnosis of children included in the facility-level monthly data was obtained from in-patient records that an attending paediatrician validated. Data on hospitalised children included children in all age groups below 13 years of age (SA's referral hospitals have a 13-year-old cut-off for paediatric care), those below one year of age (infants) and those between one and five years of age. Data on hospitalised children under the age of five years with lower respiratory tract infections (LRTI) or acute gastroenteritis (AGE) as their main diagnosis were specifically tracked. In this study, the term LRTI as a diagnostic category includes patients with lobar or bronchopneumonia, bronchiolitis and bronchitis. This categorisation was based on a standardised nomenclature used by clinicians across all sampled hospitals in admission diagnoses and mortality classification. LRTI excludes upper respiratory tract infections (URTI), upper airway obstruction, asthma and recurrent wheezing [20]. In addition, monthly admission numbers of children categorised as having severe acute malnutrition (SAM) using the WHO guidelines were also collected. In all four hospitals, the categorisation of a child under five years of age with SAM is verified by a paediatrician and then independently corroborated by an attending
dietician within 72 hours post-admission. This dual verification of nutritional categorisation enables post-rehydration weights to be utilised and lengths or heights to be rechecked for accuracy. In the WHO nutritional classification system, children are classified as having severe acute malnutrition (SAM), moderate acute malnutrition (MAM), not acutely malnourished but considered at risk (NAM@risk), not acutely malnourished (NAM), or overweight or obese [21,22]. The SAM definition was based on the weight-for-length z score and/or the presence of nutritional oedema as documented by an attending paediatrician [22]. Mid-upper arm circumference (MUAC) scores were not used in this study as their documentation was inconsistent in the reviewed source documents [21,22]. The numbers of children who demised monthly in all age categories, and specifically those with a diagnosis of LRTI, AGE or SAM under the age of five years, were also collected.

Verification of data. Four independent databases were utilised over the study periods [23]. These databases corroborated and validated information and ensured minimal missing data. Each hospital's paediatric department has an in-hospital database used as the primary database. A specialist paediatrician in each hospital is responsible for verifying and entering all weekly admission tallies and death information (categorised by age and diagnosis) from original case records into this primary database. Admission and mortality data are also verified monthly by paediatricians in the department from a standardised admissions and deaths daily register and then submitted to a facility information officer, who feeds these data to a central district-wide district health information system database (DHIS) [23]. In this study, we validated the DHIS data against source data in each hospital from the primary database held by the attending paediatricians to avoid inconsistencies. The third database was the Child Healthcare Problem
Identification Programme (Child PIP). Paediatric departments across many SA hospitals utilise this database to record and systematically review child deaths independently, with an emphasis on assessing modifiable factors related to these deaths [24]. Mortality figures per hospital were corroborated using the Child PIP and DHIS and verified at each hospital. The fourth database verified the nutritional categorisation of all in-hospital patients; in-hospital dietitians maintained these databases in each hospital. The databases were rechecked and then verified against the hospital records for discrepancies.

Data analysis and interpretation

We used descriptive statistics to summarise data and present summaries of admission, mortality and case fatality rates before, during and after the COVID-19 period with lockdowns. We did an interrupted time series segmented regression analysis by fitting linear regression models with the outcome of monthly paediatric admissions. The models included dummy lockdown level variables indicating 1 or 0 for each level 1 (least severe) to 5 (most severe) of lockdown and a dummy variable for the post-COVID-19 period. COVID-19 waves could also have caused an increased burden on the healthcare system, which may have affected paediatric healthcare use and admissions independently from lockdowns. We therefore modelled this by including a continuous variable for excess mortality in eThekwini for each month as a proxy for the COVID-19-related burden on the healthcare system. To account for seasonal changes due to RSV and other respiratory virus outbreaks and Rotavirus and other viral causes of AGE, we included two pairs of sine and cosine terms (Fourier terms) in the models. This approach takes account of pre-lockdown trends and allows estimation of the effect of each level of lockdown and whether there was a change in admissions during the period following the cessation of all lockdowns post-COVID. We built separate models by age (under
one year, under five years and between 5 and 13 years) and diagnosis (LRTI, AGE and SAM). Age-specific changes thus do not sum to the total change because the total admissions (and deaths) were analysed as a separate time series. We checked for auto-correlation by calculating the auto-correlation and partial autocorrelation functions. We analysed data using R 4.0 (R Foundation for Statistical Computing, Vienna, Austria).

Ethical consideration

Adherence to ethical guidelines was ensured throughout the research process. The study was approved by the University of KwaZulu-Natal Biomedical Research Ethics Committee (BREC/00002981/2021), the KwaZulu-Natal Department of Health's Provincial Health Research Ethics Committee, the eThekwini District Health Department and the Child Health Identification Programme (National committee), with a waiver of informed consent for analysis of anonymised, routinely collected data.

Results

During the 60-month study period that extended from 01 April 2018 to 31 March 2023, 45 015 children were admitted across all four specialist hospitals in Durban (eThekwini district). Of these, 20 490 (45·5%) were <1 year of age (infants), 16 549 (36·8%) were children between one and below five years, and 7976 (17·7%) were children between five and below 13 years. Across all these age groups, 1237 children died in hospital during the 60 months of the study period, with 733 (59·3%) being infants, 346 (28%) between one and below five years and 158 (12·7%) between five and 13 years. Table 1 compares unadjusted mean monthly admission and mortality numbers and Table 2 compares raw case fatality rates during the three assessed periods. While the mean monthly admission appeared marginally lower in the COVID-19 period, there was less of a decrease in mean monthly deaths. The case fatality rates for LRTI, AGE and SAM in the under-five-year group were higher during COVID-19.
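The segmented-regression design described in the Methods (a pre-existing trend, dummy variables for lockdown levels 1 to 5, a post-COVID dummy, monthly excess mortality as a burden proxy, and two pairs of Fourier seasonality terms) can be sketched as follows. The study fitted its models in R 4.0, so this NumPy version is an illustrative reconstruction; all variable and function names are our own, not from the study's code:

```python
import numpy as np

def design_matrix(months, lockdown_level, post_covid, excess_mortality):
    """Build the segmented-regression design: intercept, linear trend,
    one 0/1 dummy per lockdown level (1 = least severe, 5 = most severe),
    a post-COVID dummy, excess mortality as a healthcare-burden proxy,
    and two sine/cosine pairs for 12- and 6-month seasonality."""
    t = np.asarray(months, dtype=float)            # month index 0..59
    cols = [np.ones_like(t), t]                    # intercept + trend
    for level in range(1, 6):                      # lockdown-level dummies
        cols.append((np.asarray(lockdown_level) == level).astype(float))
    cols.append(np.asarray(post_covid, dtype=float))
    cols.append(np.asarray(excess_mortality, dtype=float))
    for k in (1, 2):                               # annual + semi-annual terms
        cols.append(np.sin(2 * np.pi * k * t / 12))
        cols.append(np.cos(2 * np.pi * k * t / 12))
    return np.column_stack(cols)

def fit_ols(X, y):
    """Ordinary least squares via lstsq; unused (all-zero) dummy columns
    receive a zero coefficient under the minimum-norm solution."""
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return beta
```

On a synthetic 60-month series with a known level-5 step change, `fit_ols` recovers that step as the level-5 dummy coefficient; the published mean monthly decrease of about 450 admissions under level-5 lockdown corresponds to that coefficient in the fitted model.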
The segmented regression analysis showed no significant change in monthly mortality in all ages, nor specifically in the under-1-year, 1-to-5-year and 5-13-year age groups, during any lockdown level or the post-COVID period (Table 3 provides the data and Fig 2A–2C illustrate the trends).

Discussion

Our analysis shows that changes in patterns of admissions and mortality of vulnerable SA children following the COVID-19 pandemic do differ from experiences elsewhere in the world. Despite significant decreases in admissions and changes in seasonal patterns of communicable diseases during the COVID-19 lockdowns, there was neither a concomitant decrease in in-hospital deaths nor an anticipated post-pandemic surge in admissions in children from communities with high levels of HIV, TB and poverty.

Several modelling studies and early reviews from LMICs have raised concerns about the impact of the COVID-19 pandemic on vulnerable populations, especially those where fragile healthcare systems exacerbate delayed access to care [10,25]. In our study, reflecting sick children requiring hospital admission and drawn from low-income communities with a high population density and an existing infectious burden, admission numbers did decrease, as was documented in high-income countries, following the promulgation of stringent lockdowns [5,13,15,26]. These decreases in admissions at referral hospitals mirrored decreases in admissions and visits to primary health clinics [4]. Of concern, however, is that the documented decrease in primary care visits and referral hospital admissions could reflect decreased access to healthcare for sick children. Whilst lockdown laws permitted the seeking of healthcare and all facilities remained open through the COVID-19 pandemic, the significant decrease in the admission of sick children raises the likelihood of worsening access to healthcare amongst vulnerable populations. In addition to concerns about decreased
access to health care, these findings may reflect the influence of decreased transmission of common childhood communicable diseases, possibly affected by decreased social interactions and mitigating strategies to prevent COVID-19 transmission [9,10]. It has been postulated that increased preventative hygiene habits adopted during the COVID-19 period, like masking, regular hand washing, creche and school closures, and other restrictions impacting person-to-person spread of infections, resulted in modified seasonal patterns of communicable diseases like Rotavirus-associated AGE and Respiratory Syncytial Virus-associated LRTI [6,8,27]. The impact of this possible outcome, however, has not been fully understood in vulnerable child populations, including those with high population densities.

In this study, which reflects children admitted at referral hospitals, including those with complex problems and diagnoses, mortality numbers in all age groups and in children with AGE, LRTI and SAM did not decrease during the lockdown period, unlike previously reported [10]. Our finding of the persistence of high mortality despite significant decreases in admissions in the COVID-19 period has been documented elsewhere in poor socio-economic communities [3]. The concern with this finding is that children who became sick presented later and were more unwell and were thus more likely to die. Concerns that increases in child mortality may have occurred out of hospitals and in intensive care units are not borne out, however, by any significant increase in excess childhood mortality, as seen in the age-specific annualised excess death rates (per 1000 population) documented over this period from both the community and hospitals [28]. We postulate that in our large cohort of children hospitalised in public sector referral hospitals, there are many children, especially those living within high population densities, who continued to have exposure to many
childhood infections and continued to have delayed access to care for a multiplicity of reasons. This latter group has been previously documented as experiencing delays in accessing standard healthcare despite the availability of free public health services [29]. Many caregivers here are noted to utilise multiple other sources of care, including allopathic, indigenous and home treatments, before presenting at public services, often with severe complications or in severe distress [29]. It is possible that caregivers in this sub-group would have persisted with late presentation for acute care, similar to pre-pandemic behaviours, or delayed their access to hospital care even later. Further exploration is thus required to determine how this vulnerable group was uniquely affected by the challenges posed both by the COVID-19 pandemic and the associated lockdowns.

Our study also documents that the expected surge in malnutrition cases during the lockdown period did not occur, unlike those reported in other studies from developing countries [10,30]. The unadjusted case fatality rates for SAM were higher in the COVID-19 period. Case studies specifically targeting these populations with verifiable microbiological testing may be required to unpack children's behaviours under differing contexts. We further extrapolated immunisation coverage of the study population from district-wide data. This study may help determine the epidemiological patterns of vulnerable children when faced with communicable disease outbreaks in greater detail. We did not focus on neonatal or non-medical admissions or children admitted to intensive care units (ICU) requiring ventilation. Access to intensive care units in our resource-poor areas is limited, with only 25 paediatric intensive care beds in KZN (0·73 ICU beds per 100 000 children); thus our data do reflect the majority of sick admissions [35]. We could not assess the definitive socio-economic status and inferred this based on previous usage patterns in public
sector hospitals. The retrospective data reflect in-hospital mortality specifically and do not include community-based death data.

In conclusion, our findings suggest that, in one of the regions most affected by HIV, Tuberculosis and malnutrition, whilst admissions of acutely sick children decreased in a manner similar to that in countries with better health resources, the decrease in in-hospital mortality and the anticipated post-pandemic surges in admission seen in those countries were not observed. This study provides evidence that children in vulnerable communities with high population densities and high HIV and TB infection rates behaved differently from children in communities where these conditions were not as common. These findings suggest that mitigating strategies to reduce infectious disease outbreaks possibly affected the transmission dynamics of common communicable childhood diseases differently across communities, and this requires further exploration and study. Further studies in vulnerable populations are needed to identify persisting challenges in healthcare provision, infection transmission dynamics and the impact of the promulgation of uniform pandemic control measures on child health outcomes.

Fig 3A–3C illustrate the loss of seasonal patterns in AGE and LRTI during the COVID-19 period and a return to seasonal patterns in the post-COVID period.
v3-fos-license
2020-09-30T13:17:13.166Z
2020-09-30T00:00:00.000
222003893
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fvets.2020.583286/pdf", "pdf_hash": "1a5a3e79a6095da0bdf431f4b10cc54d3a919530", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1319", "s2fieldsofstudy": [ "Environmental Science" ], "sha1": "1a5a3e79a6095da0bdf431f4b10cc54d3a919530", "year": 2020 }
pes2o/s2orc
Toxicological Risk Assessment of the Accidental Ingestion of a Honeybee (Apis mellifera L.) Present in Food

The aim of the present work was to evaluate the possible risk of toxic effects due to the ingestion of a honeybee (Apis mellifera L.) accidentally present in food. The methodology used in this study was a bibliographic survey of studies on the toxic effects related to honeybees, with a critical analysis of the possible risks of accidental ingestion of these insects. The amount of venom present in a bee is considered insufficient to induce detectable toxic effects in a person who ingests it by accident, and various components of the venom are destroyed by gastric secretions. However, despite the rare frequency, there is a risk of the ingestion of a bee causing an allergic reaction to some components of the venom in sensitized individuals. In addition, pollen carried by a bee may cause an allergic reaction in a sensitive individual. Thus, the accidental ingestion of a bee present in a food does not pose the risk of toxic effects for the majority of the population but may promote allergic reactions in susceptible individuals.

INTRODUCTION

Honeybees (Apis mellifera L.) are social insects bred for the production of honey, pollen, propolis, royal jelly, wax, and venom, and to promote the pollination of various cultivated plant species. These bees have a sting as their defense mechanism, through which their venom is inoculated (1,2). As honeybees seek products with sugar as their food source, they may end up trapped and incorporated into food (Figure 1), and potentially consumed inadvertently. Anecdotal reports of the accidental ingestion of insects are widely available (3–6), but the actual number of cases remains unknown. The intentional consumption of insects by humans, known as entomophagy, is a habit of many populations around the world. Entomophagy has been advocated as a way to increase the availability of foods of recognized nutritional value (7).
In the case of honeybees, they are traditionally eaten roasted or grilled in countries such as Japan, China, and Indonesia (8). Thus, it is possible that the heat used in the preparation methods may denature any harmful substances present. However, there is a lack of information in the literature on the toxicological risks arising from the ingestion of raw honeybees. Thus, the aim of the present work was to evaluate the possible risk of toxic effects due to the ingestion of a honeybee accidentally present in food. The methodology used in this study was a bibliographic survey of studies on the toxic effects related to honeybees, with a critical analysis of the possible risks of accidental ingestion of these insects. The possibility that the ingested insect may carry a microorganism with the potential to cause infection was not addressed in this study.

THE HONEYBEE VENOM

The honeybee sting is a defensive mechanism against predator attacks on individuals or hives (9–11). The stinger is a modification of the reproductive apparatus and is present only in worker honeybees. The venom, also known as apitoxin, is produced by venom gland cells and is injected at the time of the sting. When the stinger is introduced, it becomes trapped with the venom sac at the sting site and gradually releases the venom (10,12,13).

The peptide melittin is the most abundant toxin in honeybee venom, comprising ∼40 to 60% of the dry weight of this venom and consisting of a basic 26-amino-acid polypeptide (12,15,16). Melittin monomers bind to lipid membranes, producing pores and exerting a cytotoxic effect (17,18). Furthermore, it acts synergistically with the enzyme phospholipase A 2 , promoting damage to the cellular and mitochondrial membranes of various cell types. Arachidonic acid may be released because of cell damage (10,15). This peptide is probably the component chiefly responsible for bee venom-induced pain, through direct and indirect activation of primary nociceptor cells (16).
As melittin has various pharmacological activities, including antitumoral (15,19), anti-viral (18,20), antibacterial (15,17), antifungal (15), anti-arthritis, anti-inflammatory, anti-atherosclerotic, antidiabetic, and neuroprotective (12) effects, several studies have been conducted to evaluate the safety of the compound when administered orally. The results of these studies indicate that oral ingestion of this peptide results in low toxicity (21–23).

Apamin is a peptide neurotoxin comprising 2 to 3% of dry honeybee venom. This peptide is a specific inhibitor of small-conductance Ca 2+ -activated K + (SK) channels in the central nervous system (13,15) and activates the M 2 inhibitory muscarinic receptors of the peripheral nervous system (10,14,15). Another activity is blockage of the Kv1.3 channel, a potassium channel type found in immune cells (24). Pharmacologically, apamin has antibacterial, antifungal, anti-inflammatory, anti-atherosclerotic, and antitumoral effects (13) and has been tested for treating neurological disturbances, including Parkinson's disease and learning deficit disorder (15).

The main enzymes present in the venom are phospholipases and hyaluronidase. Phospholipase A 2 , which comprises 10% to 12% of dry bee venom, promotes the disruption of the cytoplasmic membrane by the destruction of the constituent phospholipids, resulting in cell lysis (10,14,15,25). This enzyme catalyzes the hydrolysis of glycerophospholipids, releasing fatty acids and lysophospholipids (25). Phospholipase A 2 was also found to have antibacterial, trypanocidal, antitumoral, neuroprotective, and hepatoprotective activities (15). The other enzyme found in honeybee venom is hyaluronidase, which comprises 1 to 3% of the venom. This enzyme promotes the fast tissue diffusion of the venom through tissue disruption. Hyaluronidase causes the hydrolysis of hyaluronic acid in the extracellular matrix (10,13–15).
Other activities of hyaluronidase include mast cell degranulation and a rise in blood vessel permeability (13). Phospholipases and hyaluronidase are the allergenic proteins in the venom, being responsible for cases of anaphylactic reaction to honeybee venom (13,15,26). The enzyme phospholipase A 2 , isolated from microorganisms and vertebrate animals, is used in the processing of certain foods, and it has been experimentally verified that the consumption of phospholipase A 2 residues does not represent a toxicological risk (27,28). However, some cases of allergic reactions to honeybee phospholipase A 2 residues present in honey consumed by sensitized individuals have been identified (29).

The mast cell degranulating peptide (MCDP) acts on mast cells, promoting degranulation, the release of histamine, and consequent inflammation (13,15). Paradoxically, large amounts of MCDP inhibit the release of histamine by mast cells (15). It also blocks fast-activating and slowly inactivating K + channels, resulting in neuronal hyperexcitability (30).

The biogenic amines present in honeybee venom are histamine, dopamine, and noradrenaline. Histamine, the amount of which in the venom is smaller than that released through the action of MCDP, promotes vasodilation, enhancing inflammation, whereas noradrenaline and dopamine have a well-known inotropic effect (10,13,15). The ingestion of the biogenic amines present in the honeybee venom sac probably does not represent a significant toxicological risk, because these amines are not present in amounts high enough to impact human health (31,32).

Honeybees can sting only once, leaving the stinger and venom sac at the sting site. A sting by one or a few honeybees promotes a reaction at the sting site that begins quickly and is characterized by pain, edema, and erythema. These local effects usually last for hours but may, in some cases, continue for days.
Multiple honeybee stings (a minimum of one hundred) are capable of promoting a systemic toxic reaction, characterized by agitation, vomiting, diarrhea, difficulty breathing, seizures, hyperthermia, and shock. Other clinical features of the systemic toxic reaction are rhabdomyolysis and heart failure. In addition, sensitized individuals, who have been previously exposed to honeybee venom, may exhibit an anaphylactic reaction after only a single sting (33,34). A rare effect is myocardial infarction, which usually occurs after multiple honeybee stings (35)(36)(37)(38) but has also been observed as a result of only one sting (39). It is likely that myocardial infarction is caused by spasm or thrombosis of the coronary arteries (39,40) or is secondary to the hypersensitivity reaction (38). It has been estimated that the amount of honeybee venom that is lethal to 50% of humans by injection is 2.8 mg of venom per kg of body weight. As a honeybee yields about 160 µg of venom (11), this amount is insufficient to cause detectable toxic effects in a person who has ingested only a single insect. Moreover, as reported above, various components of the venom are destroyed by gastric secretion. Accordingly, honeybee venom used for medical purposes is administered only by injection (41)(42)(43), rather than orally, most likely because it would lose its activity owing to degradation in the digestive system. Remarkably, honeybee venom can cause allergic reactions in sensitized individuals (44)(45)(46). It has been found that even residual amounts of honeybee venom in honey can induce an allergic reaction, although this is a very rare condition (29,47). These allergic reactions are triggered mainly by the peptide melittin and the enzymes phospholipase A 2 , phospholipase A 1 , and hyaluronidase. Allergic reactions to the venom can be identified by the production of specific IgE and IgG4 antibodies in the serum of patients (48).
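The scale of the injected lethal dose relative to a single bee can be made concrete; a rough sketch, where the 70 kg adult body weight is an illustrative assumption:

```python
# Rough estimate of how many honeybees' worth of venom the reported
# injected LD50 corresponds to; the body weight is an assumed value.
LD50_MG_PER_KG = 2.8        # mg of venom per kg body weight, lethal to 50% by injection
VENOM_PER_BEE_MG = 0.160    # ~160 ug of venom yielded per honeybee
body_weight_kg = 70         # assumed adult body weight

lethal_dose_mg = LD50_MG_PER_KG * body_weight_kg
bees_needed = lethal_dose_mg / VENOM_PER_BEE_MG
print(f"Lethal injected dose: {lethal_dose_mg:.0f} mg -> about {bees_needed:.0f} bees")
```

For a 70 kg adult this works out to roughly 1200 bees' worth of venom, which underlines why a single ingested insect is far below any toxic threshold even before gastric degradation is considered.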
Thus, although it is a rare condition, there is a risk that the ingestion of a honeybee may induce an allergic reaction to some venom components in sensitive individuals. In addition, the structural proteins of the honeybee itself may also induce an allergic reaction (47).
POLLEN
Honeybees are pollinating insects that collect pollen grains when visiting the flowers used for their food. The pollen grains collected by honeybees are agglutinated and transported to the colony in structures on the hind legs named corbicles or pollen baskets. Pollen grains also stick to the bee's body (49)(50)(51). In addition to providing nutrients to bees, pollen serves as a source of enzymes that aid in the digestion of nutrients, such as beta-galactosidase, and helps to establish the beneficial digestive microbiota of these insects (52). For humans, pollen is a bee product known for its pharmacological activities, which include antimicrobial, antiviral, anti-inflammatory, immunostimulatory, and antioxidant effects (53,54). The ingestion of pollen collected by honeybees may be responsible for the development of acute allergic reactions, including anaphylaxis, in sensitized individuals. Honeybee secretions probably do not significantly reduce the allergenic potential of the collected pollen (55). Although relatively rare, allergic reactions to pollen can be quite severe, even lethal (55)(56)(57)(58)(59)(60). Such a reaction requires previous exposure of the sensitized individual to the causative compound and usually does not occur at first exposure. A large number of plant species may cause allergic reactions to pollen ( Table 2). Thus, the pollen from several plant species that is collected by honeybees can cause allergic reactions in humans (60). Importantly, the same patient may have allergic reactions to pollen from more than one plant species simultaneously (58,61,62).
In addition, patients who have allergic reactions to pollen may not be hypersensitive to honeybee venom components (61). Allergic reactions to pollen occur after previous sensitization to its allergens. Pollen allergens are trapped and processed by dendritic cells that migrate to lymph nodes and induce the differentiation of naïve T helper cells into Th2 cells. The contact of epithelial barrier organs with pollen allergens can induce epithelial cells to release interleukin (IL)-25, IL-33, and thymic stromal lymphopoietin. These factors re-activate Th2 cells, which release IL-4, IL-5, and IL-9. IL-4 stimulates B cells to produce and release antigen-specific IgEs, whereas IL-5 activates eosinophils. Furthermore, IL-4 and IL-9 promote mast cell degranulation, releasing a number of compounds, including histamine, leukotrienes, cytokines, and chemotactic molecules, resulting in the clinical signs of an allergic reaction (74). A honeybee can carry more than 15 mg of pollen (75). In addition, pollen is also present within the digestive tract of a bee; a study evaluating two hives found that each honeybee contained, on average, 3.35 and 4.27 mg of pollen, respectively (76). One gram of pollen can contain between 400,000 and 6.4 million pollen grains (57). Patients exhibiting an allergic reaction after pollen consumption had a positive skin sensitivity test to 0.1 mg/mL pollen extracts (53,54). Thus, the pollen carried by a bee may cause an allergic reaction in a sensitized individual. Pollen may also contain toxic substances produced by plants (77)(78)(79)(80)(81), notably pyrrolizidine alkaloids (82)(83)(84)(85)(86). These alkaloids have potent hepatotoxic effects as well as cytotoxic, genotoxic, and oncogenic activities; some compounds also have neurotoxic and nephrotoxic effects (87,88).
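Given a 15 mg pollen load per bee (75), the exposure from a pollen-borne contaminant follows from simple arithmetic: pollen load × concentration, compared against a body-weight-scaled tolerable daily intake. A sketch, where the contaminant concentration range and the 60 kg body weight are illustrative assumptions rather than values from the text:

```python
# Per-bee exposure arithmetic for a contaminant present in pollen.
# The concentration range and body weight below are assumed, illustrative values.
pollen_per_bee_mg = 15.0            # pollen load carried by one bee (75)
conc_low_ug_per_g = 1.0             # assumed contaminant concentration range, ug/g
conc_high_ug_per_g = 20.0
safe_intake_ug_per_kg = 0.1         # assumed tolerable daily intake, ug/kg/day
body_weight_kg = 60.0               # assumed adult body weight

pollen_g = pollen_per_bee_mg / 1000.0
intake_low_ug = conc_low_ug_per_g * pollen_g
intake_high_ug = conc_high_ug_per_g * pollen_g
safe_ug = safe_intake_ug_per_kg * body_weight_kg

print(f"Intake per bee: {intake_low_ug * 1000:.1f}-{intake_high_ug * 1000:.1f} ng")
print(f"Tolerable daily intake: {safe_ug * 1000:.0f} ng")
```

Even at the high end of the assumed range, a single bee's pollen load delivers an amount well below the daily threshold, which is the same reasoning applied to the measured pyrrolizidine alkaloid and pesticide residues discussed next.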
A study in Germany revealed that 17 out of 55 pollen samples collected by honeybees and marketed in Europe contained detectable levels of pyrrolizidine alkaloids, with concentrations ranging from 1.08 to 16.35 µg/g (85). Assuming the amount of pollen carried by a honeybee to be 15 mg (75), the corresponding amount of pyrrolizidine alkaloids would range from 16.2 to 245.25 ng. As the ingestion of up to 0.1 µg/kg of pyrrolizidine alkaloids per day is considered safe for humans (84,86), the amounts likely carried by a single honeybee should not pose any toxicological risk. Pollen may also contain residues of pesticides used in agriculture (89)(90)(91)(92). The highest residual pesticide concentration found in pollen collected by honeybees in a study conducted in the United States was 16.556 µg/g of the pesticide phosmet (89). Again, assuming the amount of pollen carried by a bee to be 15 mg (75), this would correspond to a maximum phosmet amount of 248.34 ng, which would not pose a toxicological risk to humans, as the acceptable daily intake for this compound has been set at 5 µg/kg body weight (93).
CONCLUSIONS
The amount of venom present in a honeybee is considered insufficient to cause detectable toxic effects in a person who has accidentally ingested it; moreover, components of honeybee venom are destroyed by gastric secretion. In contrast, although rare, the ingestion of a honeybee carries a risk of causing an allergic reaction to some venom component in sensitized individuals. In addition, the pollen carried by a honeybee may cause an allergic reaction in a sensitized individual. Thus, the accidental ingestion of a honeybee present in food does not carry a risk of toxic effects for the majority of the population but may promote allergic reactions in susceptible individuals.
AUTHOR CONTRIBUTIONS
BS-B conceived the paper. JM and BS-B wrote the manuscript. All authors contributed to manuscript revision, read, and approved the submitted version.
Structural, Optical, Electric and Magnetic Characteristics of (In1−xGdx)2O3 Films for Optoelectronics
After (In 1−x Gd x ) 2 O 3 powder with a wide x range of 0 to 10 at.% was chemically produced, (In 1−x Gd x ) 2 O 3 thin films were evaporated under ultra-high vacuum using an electron beam apparatus. We investigated the influence of the Gd doping concentration on the magnetic, optical, electrical, and structural properties of the resultant In 2 O 3 deposits. The produced Gd-doped In 2 O 3 films have a cubic In 2 O 3 structure without a secondary phase, as shown by the X-ray diffraction results. Additionally, the chemical analysis revealed that the films are nearly stoichiometric. A three-layer model reproduced the spectroscopic ellipsometer readings to determine the optical parameters and energy gap. The absorption edge shifted toward shorter wavelengths with increasing Gd doping in the (In 1−x Gd x ) 2 O 3 films: the optical energy gap was observed to increase from 3.22 to 3.45 eV as the Gd concentration climbed. Both the carrier concentration and the Hall mobility were determined from the Hall effect studies. It was possible to construct the Ni (Al)/n-(In 1−x Gd x ) 2 O 3 /p-Si/Al heterojunction, and the dark current density-voltage characteristics of the produced heterojunctions were investigated at voltages between −2 and 2 V. Oxygen vacancies and cationic defects in the lattice, caused by uncompensated cationic charges, resulted in significant magnetism and ferromagnetic behavior in the undoped In 2 O 3 films. The (In 1−x Gd x ) 2 O 3 films, however, displayed faint ferromagnetism. The ferromagnetism seen in the (In 1−x Gd x ) 2 O 3 films is attributed to oxygen vacancies formed during the vacuum film production process: metal cations create ferromagnetic exchange interactions by capturing free electrons at the oxygen vacancies.
Introduction
Early advancements in diluted magnetic semiconductor (DMS) materials [1][2][3] were mostly based on II-VI semiconductors, such as CdSe and ZnSe, into which magnetic ions of matching valence, such as Mn, were doped. However, the difficulty in converting these materials into p- and n-type semiconductors makes them less desirable for use. GaAs, a more recently utilized III-V semiconductor, has been doped with appropriate magnetic ion (Mn) impurities to produce DMSs [4]. These DMSs have low Curie temperatures (about 200 K) and therefore show no magnetic properties at ambient temperature, making them inappropriate for applications that require room-temperature operation. Fortunately, oxide semiconductors may help to bridge this gap. Recent research has demonstrated that ferromagnetism is seen in magnetic-ion-doped semiconductors, such as zinc oxide [5], cerium oxide [6], tin oxide [7], and indium oxide [8], even at room temperature. Due to its wide band gap and n-type semiconducting characteristics, indium oxide (In 2 O 3 ) is a prime candidate among these oxide semiconductors. Optoelectronic industries make heavy use of In 2 O 3 [9], and ferromagnetism in conducting In 2 O 3 would further widen its range of applications. The In 2 O 3 matrix lattice has therefore been doped with a variety of magnetic ions.
Details of the Experiment
The polycrystalline composites (In 1−x Gd x ) 2 O 3 with x = 0, 2, 4, 6, 8, and 10 at.% were created by combining analytical-grade powders of Gd 2 O 3 and In 2 O 3 (Sigma-Aldrich, purity 99.998%) according to the following reaction:

(1 − x) In 2 O 3 + x Gd 2 O 3 → (In 1−x Gd x ) 2 O 3

The powders were thoroughly mixed in an automatic pestle and mortar at room temperature (RT) for 45 min. This mixture was subsequently compressed into disc-shaped pellets that were used as the source material for the films. (In 1−x Gd x ) 2 O 3 thin films were deposited onto clean Corning #1022 glass substrates using an electron beam gun evaporation (Edwards 306 Auto) deposition system in a high-vacuum environment (see Figure 1).
Before evaporation, the graphite boat was filled with (In 1−x Gd x ) 2 O 3 powder, and a typical vacuum of about 9 × 10 −7 mbar was applied. The substrate temperature was maintained at 140 °C while the deposition rate was held at 2 nm/min in order to increase film adhesion. The phase of the (In 1−x Gd x ) 2 O 3 films was determined with Cu-Kα1 radiation from a Philips X-ray diffractometer (type X'Pert). Pure silicon with a purity of 99.9999% was used to calibrate the XRD apparatus. The elemental composition of the films was examined using an energy-dispersive X-ray spectrometer (EDXS) and an SEM (JEOL XL) at a voltage of 30 kV. The ellipsometry parameters of the (In 1−x Gd x ) 2 O 3 layers were measured with a spectroscopic ellipsometer (J. A. Woollam, M-2000, QDI: Darmstadt, Germany) in the wavelength range of 300 to 1100 nm; the ellipsometry data were collected at a 70° angle of incidence. Detailed modeling was performed with the WVASE32 program from the J. A. Woollam Corporation to determine the optical constants of the (In 1−x Gd x ) 2 O 3 films. The transmittance of the films was evaluated using a JASCO-670 UV-Vis-NIR spectrophotometer. The Van der Pauw method and Hall effect measurements (HMS-5000, ECOPIA, Gyeonggi, Korea) were utilized to evaluate the electrical properties of the (In 1−x Gd x ) 2 O 3 films; the resistivity, mobility, carrier type, and carrier concentration were all assessed on 1 cm 2 glass substrates. The current density-voltage (J-V) characteristics of the produced devices were measured conventionally using Keithley 610 and 617 voltage source and current meters to determine the current density across the heterojunction at various Gd contents. The dark current density-voltage characteristics were measured at room temperature in a fully dark environment. Finally, a vibrating sample magnetometer (VSM, model 9600M-1, Lowell, MA, USA) was used to examine the films' magnetic characteristics; the magnetic measurements were made at RT.
X-ray Diffraction and Morphology
The measured XRD pattern of the In 2 O 3 powder is shown in Figure 2a, described by the strength of the peaks at particular angles. The figure displays the peak positions that the X'Pert-HighScore program collected following code 06-0416. Figure 2b shows the XRD patterns of the (In 1−x Gd x ) 2 O 3 films with x = 0, 2, 4, 6, 8, and 10 at.%. All films show a clear preferred orientation along (222). According to JCPDS card No. 00-006-0416, the characteristic diffraction peak of an In 2 O 3 film at 2θ = 30.57°, with a preferred orientation in the (222) plane, confirms the existence of a cubic structure. The XRD of the films did not reveal the presence of Gd 2 O 3 . The XRD peak intensities gradually decrease as more Gd 2 O 3 is added to the film, which is brought on by a decrease in crystallinity. The (222) XRD peaks, magnified in Figure 2c, shift to higher diffraction angles due to the difference between the ionic radii of Gd (1.05 Å) and In (0.94 Å). In accordance with [15,16], the lattice constant a, the plane indices (hkl), and the interplanar spacing (d hkl ) of the cubic structure are related by

d hkl = a/(h 2 + k 2 + l 2 ) 1/2

Using Bragg's law, the spacing d hkl is related to the Bragg diffraction angle θ as

2 d hkl sin θ = nλ

where λ is the X-ray wavelength and n is the diffraction order. As the concentration of Gd increases, the estimated lattice parameters a = b = c shrink and are in good agreement with the JCPDS data. The decrease in lattice parameters could be attributed to the variation in the ionic radii of Gd and In, which causes a lattice deformation.
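The two relations above can be combined to check the reported (222) peak position against the known In 2 O 3 lattice constant; a short sketch, assuming Cu-Kα radiation (λ = 1.5406 Å):

```python
import math

# Recover the cubic lattice constant from the (222) reflection reported at
# 2θ = 30.57°, assuming Cu-Kα radiation (λ = 1.5406 Å).
wavelength_A = 1.5406
two_theta_deg = 30.57
h, k, l = 2, 2, 2

theta = math.radians(two_theta_deg / 2)
d_hkl = wavelength_A / (2 * math.sin(theta))        # Bragg's law: 2 d sinθ = λ (n = 1)
a = d_hkl * math.sqrt(h**2 + k**2 + l**2)           # cubic: d = a / sqrt(h² + k² + l²)
print(f"d(222) = {d_hkl:.3f} Å, a = {a:.3f} Å")
```

The result, a ≈ 10.12 Å, agrees with the bixbyite In 2 O 3 lattice constant of the JCPDS reference, consistent with the cubic structure assignment.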
Crystal Size and Lattice Strain
For the (In 1−x Gd x ) 2 O 3 films, the average crystallite size D is calculated using Scherrer's equation [17,18]:

D = 0.9λ/(β cos θ)

where λ is the X-ray wavelength, β is the corrected full width at half maximum (FWHM) of the diffraction peak, and θ is the Bragg angle. The average crystallite size of the (In 1−x Gd x ) 2 O 3 films dropped as the concentration of the Gd dopant increased, because the incorporation of Gd ions caused the In 2 O 3 matrix to deform; the nucleation and growth rates of the Gd-doped In 2 O 3 films may be constrained as a result. The calculated values of the crystallite size are shown in Table 1. The dislocation density (δ) of the films is determined from the experimental findings using the following equation [19]:

δ = 1/D 2

The quality of the film and its defect structure are reflected in the dislocation density. With Gd doping of the In 2 O 3 matrix in this study, the density of dislocations rapidly increases, exposing the defect structure. The lattice strain (ε) can be computed with the Stokes-Wilson equation [19,20]:

ε = β/(4 tan θ)

β can be corrected by the following relation:

β 2 = β obs 2 − β std 2

where β obs is the measured peak width of the film, and β std is the standard peak width (single-crystal silicon). The crystallite size and lattice strain of the (In 1−x Gd x ) 2 O 3 films with x = 0, 2, 4, 6, 8, and 10 at.% are shown in Figure 3. The lattice strain increases as the Gd inclusion increases, while the crystallite size decreases. The ionic radius of Gd is considerably higher than that of In, which leads to a reduction in crystallite size and an increase in lattice strain. The observed decrease in D with increasing Gd may also be attributed to two additional factors: first, the combined effect of the host In 2 O 3 crystal's lattice distortion brought on by the substitution of larger Gd atoms [21]; and second, the surface of the doped samples was reported to develop a thin Gd-O-In layer, as too many In ions in the precipitation solution inhibit crystal formation [21,22].
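The three quantities above can be evaluated together from a single peak; a minimal sketch, using an illustrative corrected FWHM of 0.40° rather than a value from the paper:

```python
import math

# Crystallite size, dislocation density, and lattice strain from the (222)
# peak at 2θ = 30.57°; the FWHM below is an assumed, illustrative value.
wavelength_A = 1.5406
two_theta_deg = 30.57
beta_deg = 0.40                       # assumed corrected FWHM, degrees

theta = math.radians(two_theta_deg / 2)
beta = math.radians(beta_deg)         # Scherrer needs β in radians

D_A = 0.9 * wavelength_A / (beta * math.cos(theta))   # Scherrer: D = 0.9λ/(β cosθ)
D_nm = D_A / 10
delta = 1 / D_nm**2                                   # dislocation density: δ = 1/D²
strain = beta / (4 * math.tan(theta))                 # Stokes-Wilson: ε = β/(4 tanθ)
print(f"D = {D_nm:.1f} nm, delta = {delta:.2e} nm^-2, strain = {strain:.2e}")
```

Note how a broader peak (larger β) simultaneously lowers D and raises both δ and ε, which is exactly the trend Figure 3 reports with increasing Gd content.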
In the EDXS spectrum, (In 0.98 Gd 0.02 ) 2 O 3 clearly shows modest Gd signals due to the rising Gd concentration. Additionally, the EDXS investigation demonstrated that the composition is nearly stoichiometric. SEM images of the In 2 O 3 and (In 0.90 Gd 0.10 ) 2 O 3 thin films grown on glass substrates are shown in Figure 5a,b. The images show both films' high adhesion and dense, irregular structure. From the histograms of Figure 5a,b, the average grain size distribution was calculated using Gaussian curve fitting and found to be 40 nm and 25 nm for x = 0.0 and 0.1, respectively. These values are larger than the XRD-measured crystallite size: the XRD measurement reflects coherent X-ray diffraction within the crystalline regions, whereas the grain size in the SEM examination is established between the grain boundaries. AFM enables a quantitative validation of the films' surface structure. Typical 3D AFM scan images are displayed in Figure 6a-c for the In 2 O 3 , (In 0.94 Gd 0.06 ) 2 O 3 , and (In 0.90 Gd 0.10 ) 2 O 3 thin films, respectively. The small features appearing in Figure 6 may be attributed to the presence of the Gd dopant in addition to the In 2 O 3 matrix. The RMS roughness is calculated as the root mean square of the measured heights of the microscopic peaks and valleys, i.e., it represents the root mean square of the ordinate values within the defined area and is comparable to the standard deviation of the height. Examination of the calculation reveals that the RMS value is strongly affected by a single significant peak or defect within the microscopic surface texture. More information regarding the trend of the RMS values derived from the AFM images is provided in reference [22]. The crystallite size from the XRD study and the grain size estimated by AFM follow a similar pattern; however, the AFM value is greater than the XRD crystallite size. The root mean square (RMS) roughness values obtained from the evaluation were 5.47, 5.41, and 5.32 nm.
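The RMS definition above amounts to the standard deviation of the surface heights over the scanned area; a minimal sketch on a small synthetic height map (the values are illustrative, not AFM data from the paper):

```python
import math

# RMS roughness: root mean square of surface-height deviations from the
# mean height over the defined area, on a small synthetic height map.
heights_nm = [
    [2.0, 3.5, 1.0, 4.0],
    [0.5, 2.5, 3.0, 1.5],
    [4.5, 1.0, 2.0, 3.0],
]
z = [v for row in heights_nm for v in row]            # flatten the map
mean_z = sum(z) / len(z)
rms_nm = math.sqrt(sum((v - mean_z) ** 2 for v in z) / len(z))
print(f"RMS roughness = {rms_nm:.2f} nm")
```

Because the deviations are squared before averaging, a single tall peak or deep pit in the map disproportionately raises the result, which is the sensitivity to isolated defects noted above.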
Spectroscopic Ellipsometry for Measuring Optical Parameters
The refractive index (n) and absorption index (k) of the (In 1−x Gd x ) 2 O 3 films with x = 0, 2, 4, 6, 8, and 10 at.% were extracted by spectroscopic ellipsometry (SE). The measured SE parameters ψ and ∆ are related to the films' optical response through [23,24]

ρ = r p /r s = tan(ψ) e i∆

where r s and r p stand for the Fresnel coefficients of reflection from the film layer for s- and p-polarized light, respectively. The measurements of ψ and ∆ on the (In 1−x Gd x ) 2 O 3 films at a 70° incidence angle are shown in Figure 7a,b. In order to calculate the films' n, k, and film thickness d, a three-layer optical model was utilized, consisting of the substrate, the "B-spline" (In 1−x Gd x ) 2 O 3 layer, and a surface roughness layer. Changes in ψ and ∆ were fit by least-squares regression, minimizing the mean squared error between the measured and modeled spectra. Figure 8a,b depicts the modeled spectral dependence of ψ and ∆ for the (In 0.90 Gd 0.10 ) 2 O 3 films and shows a superior fit to the measured data (symbols) acquired across the whole range. The average thickness for all films is 200 ± 1.23 nm, while the average surface roughness is 5.67 ± 0.13 nm. The surface roughness obtained through SE is comparable in quality to that from a standard 3D AFM scan image. More information on the SE procedure and its B-spline optical model is available in reference [23]. The computed values of n and k for the (In 1−x Gd x ) 2 O 3 films are displayed in Figure 9a,b, respectively. These two graphs demonstrate how n and k rapidly decrease as the concentration of Gd 2 O 3 rises. The absorption coefficient and absorption index of the (In 1−x Gd x ) 2 O 3 layer are connected by the relation k = αλ/4π.
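The relation ρ = r p /r s = tan(ψ)e^{i∆} can be illustrated with the simplest two-phase (ambient/film) model: compute the Fresnel coefficients for an assumed complex refractive index and read off ψ and ∆. The index value below is illustrative, not a fitted value from the paper:

```python
import cmath
import math

# Two-phase sketch of rho = r_p / r_s = tan(psi) * exp(i*Delta).
n_ambient = 1.0
n_film = complex(2.1, 0.05)          # assumed n + ik of the film material
theta_i = math.radians(70)           # 70° incidence angle, as in the measurement

cos_i = cmath.cos(theta_i)
sin_t = n_ambient * cmath.sin(theta_i) / n_film      # Snell's law (complex angle)
cos_t = cmath.sqrt(1 - sin_t**2)

# Fresnel reflection coefficients for s- and p-polarized light
r_s = (n_ambient * cos_i - n_film * cos_t) / (n_ambient * cos_i + n_film * cos_t)
r_p = (n_film * cos_i - n_ambient * cos_t) / (n_film * cos_i + n_ambient * cos_t)

rho = r_p / r_s
psi = math.degrees(math.atan(abs(rho)))              # tan(psi) = |rho|
delta = math.degrees(cmath.phase(rho))               # Delta = arg(rho)
print(f"psi = {psi:.2f} deg, Delta = {delta:.2f} deg")
```

In the real measurement the single interface is replaced by the three-layer (substrate / B-spline film / roughness) stack, and n and k are the free parameters adjusted until the modeled ψ and ∆ match the measured spectra.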
The peak in the refractive index (700 < λ < 800 nm) in Figure 9a is determined by the transmission of light via nano-holes and the negative phase shift brought on by surface plasmon (SP) scattering at the interfaces between the nano-holes and the substrate. SPs, which are collective oscillations of free electrons confined at metal-dielectric interfaces, are stimulated by the electromagnetic field incident on them. The multiple optical resonance peaks that appear as a shoulder at 750 nm are caused by the coupling and decoupling process between the SP resonance evanescent waves and the light incident through the nano-holes. Reference [24] provides additional information on SPs and plasmon resonance. As seen in Figure 9a,b, the refractive index and extinction coefficient of all samples decrease as the wavelength increases; light scattering and the decline in absorbance are the causes of this behavior. In the visible region, the refractive index and extinction coefficient also decrease as the Gd content increases.
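The relation k = αλ/4π converts the extinction coefficient into an absorption coefficient; a one-line check with an illustrative k (not a value from the paper):

```python
import math

# Convert extinction coefficient k to absorption coefficient alpha
# via k = alpha * lambda / (4*pi); the k value is illustrative.
k = 0.1
wavelength_cm = 400e-7               # 400 nm expressed in cm

alpha = 4 * math.pi * k / wavelength_cm
print(f"alpha = {alpha:.3e} cm^-1")
```

A modest k of 0.1 at 400 nm already gives α above 10⁴ cm⁻¹, i.e. within the high-absorption zone where the Tauc analysis discussed next is applicable.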
The extinction coefficient values in Figure 9b are rather high, which demonstrates the substantial dielectric loss of the Gd-doped indium oxide thin films. The polycrystallinity of the films is indicated by ripples (interference patterns) in the extinction coefficient spectrum in the wavelength range of 400 nm to 800 nm. The nano-hole form and nano-hole periodicity allow exact control of the transmission wavelength position and intensity. For instance, the contribution from the structural margins becomes increasingly substantial in short-range systems with few holes, resulting in unique emission patterns.

The optical transitions of the studied materials in the high-absorption zone (α ≥ 10^4) are described by the Tauc equation [25,26]:

(αhν)^x = K(hν − Eg^opt),

where K is the Tauc parameter, which represents the degree of disorder in the materials and depends on the transition probability, x is the super index controlled by the transition type that governs optical absorption, and Eg^opt is the energy gap. When we plotted (αhν)^x vs. (hν) for the (In1−xGdx)2O3 films with different Gd contents, a direct transition was confirmed. The outcomes are shown in Figure 9c; the gap is read off from the intersection of the extended linear fit of the linear component with the energy axis. As can be observed in Figure 10, the findings demonstrate that the obtained Eg^opt values rise as the Gd concentration does. As the amount of Gd2O3 rises through the continual substitution of In atoms with Gd, the fundamental band gap shifts to the blue. The resulting increase in carrier-electron injection produces the Burstein-Moss effect, which shifts the Fermi level into the conduction band [27]. By contrast, a ZnO crystal develops more interstitial oxygen impurities when the doping concentration is increased, which results in more delocalized states and a smaller band gap.
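The Tauc-plot gap extraction described above can be sketched numerically: generate a linear Tauc region for an assumed direct gap, fit it, and read the gap off the energy-axis intercept. All numbers below are synthetic and purely illustrative.

```python
import numpy as np

# Sketch of the Tauc extrapolation (αhν)^x = K(hν − Eg) for a direct gap
# (plotting (αhν)^2 vs hν): fit the linear region and take the
# energy-axis intercept. The synthetic data assume Eg = 3.3 eV.
Eg_true = 3.3                                # assumed optical gap, eV
hv = np.linspace(3.35, 3.80, 50)             # photon energies above the gap, eV
tauc = 5.0e9 * (hv - Eg_true)                # (αhν)^2, linear above Eg by construction

slope, intercept = np.polyfit(hv, tauc, 1)   # straight-line fit of the Tauc plot
Eg_est = -intercept / slope                  # intercept with the energy axis

print(round(Eg_est, 3))                      # ≈ 3.3 eV
```

With real spectra, only the genuinely linear portion of the plot should enter the fit; including sub-gap absorption (Urbach tail) would bias the intercept.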
The measured transmittance (T) of the (In1−xGdx)2O3 films covering the 300 to 1100 nm spectral region is shown in Figure 11. High-transparency films are needed for optical devices, and the transmittance increases relative to the In2O3 films as the Gd doping level rises [27,28]. The increase in transmittance is caused by the larger amount of oxygen distributed across the coating [29-31]. The inset of Figure 11 shows the (T) trend of the (In1−xGdx)2O3 films in the strongly absorbing region, where the transmission edge shifts to the blue with increasing Gd concentration. As a result, the energy gap expands, as seen in Figure 11.
Electric Properties

Using a standard four-point probing technique, the electrical characteristics of the (In1−xGdx)2O3 films with different Gd contents were investigated. The sheet resistance is calculated from Rs = 4.53·V/I [Ω/sq], where V is the voltage in volts, I is the current in amperes, and 4.53 is the correction constant [32,33]. The resistivity ρ (in Ω·cm) follows from ρ = Rs·d, where d is the film thickness [34]. As the Gd content increases, the resistivity decreases, as shown in Figure 12. The measurements show that the (In1−xGdx)2O3 films are n-type materials.

The same figure displays the measured electrical properties of the prepared (In1−xGdx)2O3 thin films as a function of Gd concentration. It is evident that the carrier concentration, n, and the carrier mobility change as the Gd content grows. Both the carrier concentration and the mobility are optimum at a Gd concentration of 8 at.% and become approximately fixed at 10 at.%. These results demonstrate that the resistance decreases as the carrier concentration rises. The increase in mobility is associated with the decrease in crystallite size and the increase in lattice strain.
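The sheet-resistance and resistivity relations quoted above can be evaluated directly. The measured voltage, current, and film thickness below are invented for illustration (the 200 nm thickness matches the average reported earlier).

```python
# Numerical sketch of the four-point-probe relations from the text:
# Rs = 4.53·V/I (sheet resistance, Ω/sq) and ρ = Rs·d (resistivity, Ω·cm).
# V and I are assumed measurement values, not data from the paper.
V = 2.0e-3          # measured voltage, V
I = 1.0e-3          # source current, A
d_cm = 200e-7       # film thickness: 200 nm = 200e-7 cm

Rs = 4.53 * V / I   # sheet resistance in Ω/sq
rho = Rs * d_cm     # resistivity in Ω·cm

print(Rs, rho)
```

Note that 4.53 (= π/ln 2) is the geometric correction factor for a thin film much wider than the probe spacing; other sample geometries require different correction constants.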
Current Density versus Voltage for the Ni(Al)/n-(In1−xGdx)2O3/p-Si/Al Heterojunction

Figure 13 displays the analyzed p-n junction diagram. The response of the created p-n junction was examined under reverse and forward bias applied in the vicinity of −2 to 2 V. The applied voltage determines the current density J, and the diode equation [35,36] provides the other diode parameters:

J = J0 [exp(qV / (n kB T)) − 1].
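The ideal-diode relation for this heterojunction can be sketched numerically. The saturation current density J0 and quality (ideality) factor n below are assumed values for illustration, not parameters fitted to the paper's devices.

```python
import math

# Minimal sketch of the diode relation J = J0·[exp(qV/(n·kB·T)) − 1].
# J0 and n are assumed, not fitted values from the paper.
q = 1.602e-19        # electronic charge, C
kB = 1.381e-23       # Boltzmann constant, J/K
T = 300.0            # room temperature, K
J0 = 1.0e-9          # assumed saturation current density, A/cm^2
n = 2.0              # assumed diode quality (ideality) factor

def current_density(V):
    """Diode current density (A/cm^2) at bias V (volts)."""
    return J0 * (math.exp(q * V / (n * kB * T)) - 1.0)

for V in (-1.0, 0.2, 0.4):
    print(V, current_density(V))
```

The exponential growth under forward bias and the near-constant reverse current (bounded by −J0) reproduce the asymmetry between forward and reverse branches discussed for Figure 14.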
In this equation, q is the electronic charge (1.6 × 10^−19 C), kB is Boltzmann's constant, T is room temperature (RT), J0 represents the saturation current density, and n typifies the quality factor of the manufactured diode. With applied voltage in the recommended range, Figure 14 shows the dark (J-V) characteristics of the produced diode in forward and reverse bias for the (In1−xGdx)2O3 thin films on a silicon substrate. The current density rises along with the Gd content up to a level of 8%, before becoming essentially fixed at a level of 10%.
Figure 14a,b depict the current density versus applied voltage under forward and reverse bias, in the dark and under illumination, respectively. The current at forward bias is higher than the current at reverse bias (J-V). These figures show how an increase in forward bias produces an increase in current density, which rises significantly in the low-voltage region. In the depletion zone, also known as the "low-voltage region", the reverse current density of the examined p-n junction displays a weaker exponential behavior than the junction's forward current density in the same region. As a result, it may be argued that the constructed p-n junction has remarkable rectification qualities, because the resistivity falls as the Gd concentration rises. For instance, (In0.92Gd0.08)2O3 films, which are candidates for optoelectronic and solar cell applications, have a fair crystallite size, high conductivity, high carrier concentration, and good carrier mobility.
Magnetic Characterization

Figure 15 displays the magnetic hysteresis loops of In2O3 powder, which represent its diamagnetic behavior at room temperature (a). Figure 15b displays the magnetic loops of powder samples of (In1−xGdx)2O3 with x = 2, 4, 6, 8, or 10 at.%. Small amounts of ferromagnetism are present in these samples, and the magnetization increases with the magnetic field. This implies the presence of paramagnetic phases in the Gd-doped In2O3 powders. Our results clearly showed no signs of ferromagnetic clusters or impurity phases. When Gd3+ occupies the In3+ lattice site, a single-phase structure is produced; the oxygen vacancy therefore has very little probability of growing.

Figure 16 displays the field-dependent magnetization curves observed at 300 K for the (In1−xGdx)2O3 films with x = 0, 2, 4, 6, 8, or 10 at.%. The film magnetization data were obtained after subtraction of the substrate contribution. At room temperature, soft ferromagnetic characteristics were present in all doped films. It is important to note that In2O3 films, even when undoped, display ferromagnetic properties and distinct magnetization. Indeed, ferromagnetism is present in undoped semiconducting oxide films such as TiO2, ZnO, HfO2, and In2O3.
This may be because of oxygen vacancies and cation defects in the lattice, which give rise to an uncompensated cation charge. As vacancies, spin splitting, and high-spin states develop in this system, an exchange interaction arises between the ferromagnetism and the electrons filling the oxygen vacancies [37]. Density functional theory (DFT) calculations on magnetism demonstrate that intrinsic point defects such as "O" vacancies and In interstitials serve as shallow donors while, in contrast, "O" gaps and In vacancies act as shallow acceptors [38]. Therefore, uncompensated cations caused by defects such as oxygen vacancies and grain-boundary defects can be responsible for the ferromagnetic hysteresis loops observed in undoped In2O3 films. Additionally, at ambient temperature, all Gd-doped In2O3 films display ferromagnetism, and the saturation magnetization changes only nominally as the Gd doping concentration increases. As a result, we explain the observed ferromagnetism as arising from the creation of magnetopolarons through oxygen vacancies and trapped electrons in the Gd-doped In2O3 films, which leads to room-temperature ferromagnetism [39]. The oxygen vacancies produced during film deposition may thus be the cause of the observed room-temperature ferromagnetism in the films. According to Coey and colleagues [40], oxygen vacancies in wide-band-gap semiconductors trap free electrons; these trapped electrons (F centers) then act as intermediaries in magnetic exchange interactions with nearby metal cations. Additionally, numerous theoretical and experimental studies have demonstrated that cationic vacancies are a root cause of ferromagnetism [41,42]. With increasing Gd concentration, the (In1−xGdx)2O3 films (x = 2, 4, 6, 8, or 10 at.%) become more ferromagnetic.
The magnetic exchange between Gd3+ and In3+ surrounding the empty electron trap may be the cause of the enhanced ferromagnetism; ferromagnetism results from the magnetic exchange of Gd3+ pairs at the Gd3+-F center. As the Gd content rises, the saturation magnetization rises as well.

Conclusions

Using a chemical reaction process, (In1−xGdx)2O3 powder with x = 0, 2, 4, 6, 8, and 10 at.% was produced. Thin layers were then evaporated in high vacuum with an electron gun. The impact of the Gd doping level on the films' structural, optical, and magnetic characteristics was examined. All of the Gd-doped In2O3 thin films showed the cubic In2O3 structure without any Gd-dopant impurity phases, according to the XRD data. Using an ellipsometric model, the optical constants of the (In1−xGdx)2O3 films were determined from the SE measurements. With increasing Gd2O3 concentration, both n and k were found to decrease over the entire spectral range. This was attributed to the contracting crystallinity and growing lattice strain. With increasing Gd2O3 content, the energy gap widened from 3.22 eV to 3.45 eV, corresponding to a direct optical transition.
This was explained by the effects of the increased lattice strain and the reduction in grain size from 26.4 to 13.2 nm. The electrical resistivity of the (In1−xGdx)2O3 films was shown to decrease as more Gd was added. The (In0.92Gd0.08)2O3 film is therefore one of the most promising materials for optoelectronic and solar cell applications, since both the conductivity and the transparency in the visible region improve with increasing Gd concentration. A thorough examination of the resulting forward and reverse biases was also conducted. Under forward bias, the produced p-n junction (solar cell) behaves differently, and this difference is most noticeable at low voltages. The growth and formation of the depletion region between the (In1−xGdx)2O3 layer and the Si substrate are thought to be the cause of the exponential behavior at low voltage. It was noted that, because of oxygen vacancies and cation defects in the matrix, the undoped In2O3 films display ferromagnetic behavior with differing magnetization. The Gd-doped In2O3 thin films, in turn, showed room-temperature ferromagnetism (RTFM). As the Gd concentration in the (In1−xGdx)2O3 thin films increased, the ferromagnetic strength also increased. The ferromagnetic exchange between two neighboring Gd3+ ions via trapped oxygen vacancies was identified as the source of the observed ferromagnetism. As a result, Gd-doped In2O3 films are suitable candidates for spintronic device production at ambient temperature.
Analysis on the Construction Strategy of Building Electrical Engineering with Intelligent Technology

Nowadays, the rapid development of China's society has driven fast progress in the construction industry, further raising the level of construction technology and giving it the character of diversified development. At present, the internal connection between electrical engineering and intelligent technology is growing closer as the level of science and technology rises, and intelligent technology is used from the beginning to the end of building electrical engineering. In order to further explore the construction strategy of building electrical engineering with intelligent technology, this paper analyzes the basic introduction, application characteristics, application status, technical key points, and solution strategies of building electrical engineering, attempting to improve the construction effect of building electrical engineering with intelligent technology.

Introduction

In building electrical construction, electrical engineering is one of the important parts of the construction project; it is related to all aspects of the overall project, and the construction of building electrical engineering is mainly based on electrical equipment and devices. To some extent, the construction effect of building electrical engineering is directly related to the effect and progress of the overall construction project. Therefore, it is particularly important to strengthen the construction of building electrical engineering. Nowadays, the level of China's science and technology is growing day by day, and in the construction of building electrical engineering, the rational application of intelligent technical means is conducive to improving the construction accuracy and level of electrical engineering.
However, judged against the general laws of construction, the current intelligent technology of building electrical engineering still has some limitations in actual construction, which restricts the further development of China's building electrical engineering to a certain extent.

Basic introduction of building electrical engineering

At present, people's living standards are getting higher, and people have put forward new requirements and standards for buildings, especially for the construction effects of building electrical engineering. With the rapid development of science and technology, new technological means are also widely used in the construction of building electrical engineering. In order to ensure people's living safety, the units undertaking building electrical engineering should pay attention to the application of intelligent technology and continuously optimize its level, so as to improve the security and reliability of building electrical engineering [1]. In the construction of building electrical engineering, the main work is to install transformers, cable lines, lighting and lamps, lightning protection facilities, overhead lines, and power units. It can be seen that building electrical engineering and other

Application characteristics of intelligent technology in building electrical engineering

Through observation and research, it is found that the application of intelligent technology in building electrical engineering has high value and significance. For example, the application of intelligent technology can improve the construction effect of building electrical engineering and bring convenience to people's daily life. The application characteristics of intelligent technology mainly include the following points.
Flexibility

As for the electrical controller, the inherent controller operation is very complicated and cumbersome, depending too much on the subjective judgment of the operator. Human error is therefore likely in actual operation, and the application of intelligent technical means can effectively make up for this deficiency [2]. The use of intelligent technical means not only improves the accuracy of construction in building electrical engineering, but also reduces the workload of technical workers and improves their working efficiency, continuously enhancing the flexibility of electrical engineering construction. Tasks can thus be accomplished by intelligent technical means without the guidance of the relevant technical workers.

Consistency

The consistency characteristic mainly concerns the processing of different data. Unfamiliar data information is input through intelligent technical means, and an evaluation is carried out after the input is completed, so that it meets the automation standards of building electrical engineering. For different control subjects, the significance of the results will also change. Although an intelligent technical means may not be able to take the relevant control action in time in a given situation, it will produce the same effect. However, operators must realize that if the controller unit is replaced, the expected effect may not be achieved. As a result, the principle of prudence must be upheld in the design work, and the processing of work details needs to be optimized, to avoid mistakes in the work and minimize errors.

Security

Research and investigation have shown that safety accidents caused by electrical systems in Chinese residences are increasing year by year, especially in buildings with a long history, where the incidence of safety accidents induced by electrical systems is more frequent [3].
The root cause of this problem is the low working efficiency and poor safety of the electrical systems in age-old buildings, which easily increase the probability of safety accidents. With intelligent technical means, statistics show that the occurrence probability of safety accidents is significantly reduced. In the final analysis, because of the high sensitivity of intelligent technologies, once residents operate improperly or workers' operation is not standardized, the intelligent technology will detect it in time and control it to avoid safety risks, which gives the relevant controllers reliability and security, so as to protect people's life, health, and safety.

EPPCT 2020, IOP Conf. Series: Earth and Environmental Science 474 (2020) 052044, IOP Publishing, doi:10.1088/1755-1315/474/5/052044

The technical level of electrical construction is low

Intelligent technology is still at the stage of improvement and development in building electrical construction, and it involves new technical means. As a result, its effect in actual building electrical construction is small, the working efficiency of industrialized production is low, and regional development is unbalanced. Technical workers have not fully grasped the methods of intelligent technology and design management, which also leads to the low level of electrical construction technology.

The electrical construction is not very standardized

Nowadays, intelligent buildings are gradually emerging in our country. Most people have a high degree of recognition for intelligent buildings, promoting the widespread application of intelligent building design, which has strengthened the electrical construction technology of intelligent buildings to some extent [4].
However, due to the rapid development of intelligent technology, no clear specifications and standard instructions have been issued for the related technology in the application of the relevant systems and equipment, which makes it prone to problems.

The security of relevant controllers is improved

With the extensive application of intelligent technology in the construction of building electrical engineering, it is evident that the occurrence probability of safety accidents is declining. The root cause is the high sensitivity of intelligent technology: if residents or the relevant workers operate improperly, the relevant device with intelligent technology will detect and control the problem in time, which can effectively avoid safety accidents, further improve the security and stability of the controller, and effectively guarantee the safety of people's lives and property.

Technical key points of wire laying

Building construction workers need to understand and master the piping in the construction drawings, meet the design requirements in accordance with the construction drawings of the electrical engineering, and not change them arbitrarily. They should mark the ends and branches of the cables to ensure the cables are arranged in order. Fixed points should be set at both ends and at the turnings, and the spacing between fixed points should be controlled at about 5-10 meters. In vertical cable laying, the cable material should be considered in order to set the fixed points reasonably [5]. It is worth noting that spare lines should be reserved in the cable laying; attention should be paid to the anticorrosion of the pipelines, the connections of pipelines should be clear at a glance, the pipelines should be well protected in the actual construction process, and the pipelines should be kept unobstructed.
Technical key points of installing remote processors

In order to maintain the unity of the buildings, the automatic control systems, and each RPU's communication facilities, it is necessary to use the corresponding lines and adopt different RPUs, so as to control the RPUs within the same system. RPUs should be reasonably installed around or in the machine room, and in the application of the control system of the air-conditioning unit, the overall level of remote-processor installation technology is improved by controlling the remaining output and input ports, lightings connected to the seat,

Technical key points of BAS line installation

In the process of line laying, the BAS system occupies a dominant position, and workers need to have a good understanding of the special pipelines of each part of the lines, such as the switch lines where the water-level floats are located and the lines where the flowmeters are located. Because the lines usually require professional wires provided by the manufacturers, it is necessary to combine the grounding parts of gateways, computers, and other electronic equipment with the other weak-current engineering, so that they can be grounded to the main line separately.

Technical means of lightning protection grounding

In the process of intelligent building design, it is necessary to follow the design drawings of the electrical installation and, based on the actual situation of the construction, strengthen the focus on the welding of the baseplate and mark the two main bars, to facilitate inspection by the relevant workers. Vigilance is needed when laying the ground down-lead, and a manually connected ground down-lead should be kept straight, with no dead angles. The size of the flat-steel interface is controlled to not more than 25 mm × 4 mm, and the diameter of the round steel must not be less than 12 mm.
Test points should be set on the ground down-lead at a height of 1.5-1.8 m above the ground, and the diameter of the bolt on the disconnecting clamp should be kept within 10 mm. The metal protective tube for the ground down-lead must be connected to the down-lead itself to achieve electrical continuity.

Enhance the technical level of electrical construction
In building electrical engineering, new technologies, new concepts, and new methods should be applied reasonably, and construction coordination should be done well to improve the routing of strong- and weak-current services into households and the reserved holes for pipelines. For embedded elevators, suspenders, related bolts, iron poles, the basic steel of accessory cabinets, and so on, workers' professional competence should be improved and quality supervision emphasized. Construction organizations can raise workers' comprehensive ability through professional skills training and lectures, helping workers master intelligent technical means, thereby accelerating the improvement of electrical construction technology and ensuring the construction effect of building electrical engineering.

Do well in staggered construction
Where all trades must cooperate on construction, the work should be analyzed in advance to optimize the construction process and improve coordination. For example, during construction of an electromagnetic shielding project, every aspect depends on the cooperation and coordination of the various professions. If each link focuses only on the progress of its own construction, disputes and other problems are bound to arise.
EPPCT 2020 IOP Conf. Series: Earth and Environmental Science 474 (2020) 052044 IOP Publishing doi:10.1088/1755-1315/474/5/052044

This requires the supervisors to fully control the progress of the construction, coordinate all professions during actual construction, and draw up an effective construction plan according to the actual conditions and the characteristics of the construction technology, to ensure the orderly construction of the building electrical engineering.

Realize sharing through sensor technology
In modern building electrical engineering, workers should use sensor technology reasonably to share resource information. For example, building on traditional technology, the electrical engineering department collects and summarizes the construction situation of the project, uses computer technology to combine it with professional theoretical knowledge of electrical equipment, electromagnetic fields, and circuits, and carries out comprehensive analysis and research on the collected data and information. At the same time, management workers can compare the results with the original data and information, and automatically control the results appearing in actual operation, to ensure the overall quality of the construction of building electrical engineering.

Reasonably optimize the building electrical design
In the actual construction of building electrical engineering, workers need to apply a sub-construction method to assess the electrical load status, formulate corresponding intelligent optimization methods, sort out construction ideas for intelligent emission reduction, and improve overall construction efficiency.
For different electrical engineering projects and load characteristics, the application methods and value of electrical intelligent technology should be analyzed in order to select the best design method and the most advanced technical means, creating greater social and economic benefits for building electrical engineering. In addition, managers should pay attention to the management of electrical design, ensure that design schemes meet the standards required by the relevant departments, properly select transformer facilities, and track changes in electrical load promptly to reduce unnecessary power loss. It should be noted that electric motors, transformers, and other highly sensitive facilities in the power system are affected by current losses, which increase the operating power loss of the circuit. To address this problem, electrical intelligent technology should be applied in the design of the power distribution system. Intelligent technology plays a considerable role in the optimization of electrical automation equipment, for example through genetic algorithms and expert systems. A genetic algorithm, which simulates biological genetics, can optimize a design through progressive search over candidate gene arrangements. An expert system draws on expert experience, for example encoded as machine-learning samples, to solve problems arising in the actual construction of building electrical engineering. When workers use genetic algorithms and expert systems reasonably in electrical engineering construction, the design of electrical equipment can be further optimized to ensure that the building electrical engineering meets the relevant requirements and standards.
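The progressive search of a genetic algorithm described above can be illustrated with a minimal sketch. The toy objective (picking a transformer load factor near 0.75 to minimise combined losses) and all numeric settings are hypothetical, chosen only to show selection, crossover, and mutation; this is not an implementation from the paper.

```python
import random

def genetic_optimize(fitness, bounds, pop_size=30, generations=60,
                     mutation_rate=0.1, seed=0):
    """Minimal genetic algorithm: evolve a population of candidate
    parameter vectors toward higher fitness via truncation selection,
    one-point crossover, and per-gene mutation."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]          # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, dim) if dim > 1 else 0
            child = a[:cut] + b[cut:]             # one-point crossover
            for i, (lo, hi) in enumerate(bounds):
                if rng.random() < mutation_rate:  # per-gene mutation
                    child[i] = rng.uniform(lo, hi)
            children.append(child)
        pop = parents + children                  # elitism: parents survive
    return max(pop, key=fitness)

# Toy objective: a load factor near 0.75 minimises the (illustrative)
# combined iron and copper losses.
best = genetic_optimize(lambda x: -(x[0] - 0.75) ** 2, bounds=[(0.0, 1.0)])
```

Because the fitter half of each generation is carried over unchanged, the best candidate found so far is never lost, so the search converges steadily on the optimum.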
Main points for attention
Although intelligent technology has significant advantages in building electrical engineering, many problems remain to be solved in actual construction. Workers need to understand the relevant precautions and main points so as to avoid adverse results caused by an excessive pursuit of the convenience of intelligent machines. At the economic level, since the intelligent technology currently applied in building electrical engineering is relatively advanced, its development costs should be controlled, so that the overall efficiency of building electrical engineering improves while costs remain reasonable. At the design level, workers must base the design on the actual situation, identify deficiencies and defects, and avoid pursuing construction goals inconsistent with reality, so as to truly exert the advantages of intelligent technology, reduce the construction difficulty of electrical engineering, and control operating costs.

Conclusions
In summary, analysis of the application of intelligent technology in the construction of building electrical engineering shows that intelligent technology occupies a core position in this field in the new era of rapidly developing information. The rational use of intelligent technical means improves the accuracy of electrical engineering operation, further enhances the overall effect of electrical engineering, and strengthens the security and reliability of electrical control.
However, analysis and research have confirmed that the application of intelligent technology in the construction of building electrical engineering also has some limitations, which adversely affect the engineering results to some extent. Therefore, we should pay attention to how intelligent technology is applied, learn advanced technical concepts, and optimize the construction process of building electrical engineering. Only in this way can we organically combine intelligent technology with electrical engineering construction, promote the development of building electrical engineering, build a harmonious and comfortable living environment for people, and realize innovation in the construction technology of building electrical engineering.

Packaging world, 2018 (9)
v3-fos-license
2018-04-03T02:06:05.921Z
2014-12-01T00:00:00.000
17648850
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plospathogens/article/file?id=10.1371/journal.ppat.1004527&type=printable", "pdf_hash": "c0585949e0b781c65ad3f91c2865003d88a29172", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1322", "s2fieldsofstudy": [ "Biology", "Medicine" ], "sha1": "c0585949e0b781c65ad3f91c2865003d88a29172", "year": 2014 }
pes2o/s2orc
Intraspecies Competition for Niches in the Distal Gut Dictate Transmission during Persistent Salmonella Infection

In order to be transmitted, a pathogen must first successfully colonize and multiply within a host. Ecological principles can be applied to study host-pathogen interactions to predict transmission dynamics. Little is known about the population biology of Salmonella during persistent infection. To define Salmonella enterica serovar Typhimurium population structure in this context, 129SvJ mice were orally gavaged with a mixture of eight wild-type isogenic tagged Salmonella (WITS) strains. Distinct subpopulations arose within intestinal and systemic tissues after 35 days, and clonal expansion of the cecal and colonic subpopulation was responsible for increases in Salmonella fecal shedding. A co-infection system utilizing differentially marked isogenic strains was developed in which each mouse received one strain orally and the other systemically by intraperitoneal (IP) injection. Co-infections demonstrated that the intestinal subpopulation exerted intraspecies priority effects by excluding systemic S. Typhimurium from colonizing an extracellular niche within the cecum and colon. Importantly, the systemic strain was excluded from these distal gut sites and was not transmitted to naïve hosts. In addition, S. Typhimurium required hydrogenase, an enzyme that mediates acquisition of hydrogen from the gut microbiota, during the first week of infection to exert priority effects in the gut. Thus, early inhibitory priority effects are facilitated by the acquisition of nutrients, which allow S. Typhimurium to successfully compete for a nutritional niche in the distal gut. We also show that intraspecies colonization resistance is maintained by Salmonella Pathogenicity Islands SPI1 and SPI2 during persistent distal gut infection.
Thus, important virulence effectors not only modulate interactions with host cells, but are crucial for Salmonella colonization of an extracellular intestinal niche and thereby also shape intraspecies dynamics. We conclude that priority effects and intraspecies competition for colonization niches in the distal gut control Salmonella population assembly and transmission.

Introduction
The Salmonella enterica serovars are important pathogens that cause disease ranging from a self-limiting gastroenteritis to persistent systemic infections. The human-adapted Salmonella enterica Typhi and Paratyphi serovars are the causative agents of typhoid fever, and penetrate the intestinal epithelium to disseminate to systemic tissues [1]. Approximately 1-6% of infected patients become chronic carriers and serve as the reservoir of disease, remaining asymptomatic while excreting Salmonella in their stool [1,2]. S. Typhimurium causes a typhoid-like disease in mice, but also infects a wide range of mammalian hosts, including livestock [3,4]. S. Typhimurium is a major cause of foodborne diarrheal disease in humans, but can also cause invasive nontyphoidal Salmonella (NTS) disease in immunocompromised individuals [5,6]. NTS can persist in the gastrointestinal tract and be excreted in feces in certain patients [7], with elevated levels of NTS fecal shedding associated with antibiotic therapy [8]. Surprisingly little is known about Salmonella fecal shedding dynamics, particularly during persistent infection. However, this aspect of the Salmonella life cycle is fundamentally important for understanding transmission to new hosts. Transmission of this enteric pathogen occurs via the fecal-oral route. During invasive disease with host-adapted serovars, Salmonella invade the Peyer's patches (PP) in the small intestine and breach the epithelium.
Trafficking through the blood and lymphatics results in systemic dissemination of the pathogen to the mesenteric lymph nodes (mLN), bone marrow, spleen, liver, and gallbladder [9]. It is thought that systemic Salmonella in gallbladder bile secretions reseed the small intestine to be transmitted in feces [1,10]. However, the fate of the initial invading Salmonella in the intestine and whether they contribute to fecal shedding has not been determined. A deeper understanding of the within-host population biology of Salmonella infections is crucial for determining treatment strategies and preventing spread. The mammalian host can be viewed as an ecosystem, with different tissues functioning as interconnected habitats. In this landscape, pathogens develop into population structures based on processes of dispersal, diversification, environmental selection, and coevolution within the host [11]. During host-to-host spread, each individual acts as an independent ecosystem, and a pathogen must adapt to a new environment in order to be successfully transmitted. Principles in ecology can thus be applied to explain and predict the resulting infection dynamics [11,12]. Since host-adapted Salmonella serovars first enter the gastrointestinal tract before spreading to systemic tissues, we hypothesized that distinct groups of communities would assemble within these two host compartments. In population ecology, this is referred to as a subpopulation, or a local group of individuals that interact within a certain habitat [13][14][15][16][17]. A metapopulation then consists of a collection of subpopulations with various interactions and rates of dispersal between their habitats. Indeed, studies utilizing tagged isogenic strains have revealed formation of metapopulations in other systemic infections. 
Due to differing replication rates and dispersal routes within host tissues, independent pathogen subpopulations form during Listeria monocytogenes, Yersinia pseudotuberculosis, and uropathogenic Escherichia coli infections [18][19][20][21], although the impact of these subpopulations on transmission is unknown. Wild-type isogenic tagged Salmonella (WITS) strains have been developed to resolve the early kinetics of acute infection in the susceptible C57BL/6 mouse background. In the streptomycin-treated diarrhea model, WITS were applied to generate a mathematical model describing replication and immune clearance of Salmonella in the cecal lymph node 24 hours post-infection [22]. Analysis of an intravenous model of infection revealed that concomitant death and rapid bacterial replication resulted in the formation of independent WITS subpopulations in the liver and spleen, although hematogenous mixing led to the homogenization of these systemic communities after 48 hours [23]. A study of early dissemination determined that founder bacteria initiated infection independently in Peyer's patches and systemic compartments 4 days post-infection [24]. However, the WITS technique has not been utilized to dissect the spatiotemporal population dynamics during chronic infections. It is not known whether different subpopulations of Salmonella form during persistent infection, or how they contribute to the pool of Salmonella that is ultimately shed in the feces. Furthermore, it is important to determine whether Salmonella that are carried long-term in systemic tissues and/or in the gallbladder contribute to fecal shedding in the presence of a previously established intestinal subpopulation. The effect of an established intestinal subpopulation on subsequent super-infections is also unclear. However, this scenario could arise in endemic regions and outbreaks, and therefore has implications for human disease and livestock husbandry.
It is also unclear whether humans can be co-infected with multiple Salmonella strains due to difficulties in obtaining consistent patient samples, but this scenario could arise in endemic regions and outbreaks. Studies in ecology have determined that immigration order dictates community structure through a priority effect, in which early colonization affords one member an advantage over future colonizers [25][26][27]. These competitive interactions are often mediated by resource availability [26][27][28]. Darwin's naturalization hypothesis posits that challenging species are more successful in habitats in which their close relatives are absent [29], as the more closely related they are, the more strongly they will compete for the same resources. Following this logic, we hypothesized that different subpopulations of Salmonella will compete for colonization of niches important for fecal shedding. In this study, we employed tagged isogenic S. Typhimurium strains in a mouse model of persistent systemic infection. We show that a Salmonella metapopulation structure forms during persistent infection, with distinct subpopulations in intestinal and systemic tissues. We further found that established subpopulations of intestinal Salmonella colonize crucial extracellular niches in the cecum and colon that are required for fecal shedding. Systemic Salmonella from the gallbladder, as well as challenging strains from other infected donor mice, are excluded from the distal gut niche in a novel observation of intraspecies colonization resistance by an enteropathogen. Salmonella hydrogenase, an enzyme that mediates acquisition of microbiota-derived hydrogen [30], is required to exert priority effects in this crucial transmission niche. In addition, we demonstrate that maintenance of this intraspecies colonization resistance is dependent on the Salmonella pathogenicity islands SPI-1 and SPI-2 during persistent infection. 
Author Summary
Salmonella enterica serovars infect various mammalian hosts, causing disease ranging from self-limiting diarrhea to persistent systemic infections such as typhoid fever. Here we investigated the impact of an established intestinal S. Typhimurium population on fecal shedding in the presence of another challenging strain. This scenario arises during host-to-host transmission, as well as during chronic host-adapted infections when systemic Salmonella reseed the intestinal tract to be transmitted in feces. In a mouse model of persistent Salmonella infection, we found that distinct subpopulations formed in intestinal and systemic tissues. Expansion of the intestinal subpopulation was responsible for increases in fecal shedding, rather than increased secretion of systemic Salmonella. Furthermore, the Salmonella that initially colonized the gut excluded challengers from the cecum, colon, and feces. A challenging systemic strain could only be shed upon ablation of the established intestinal strain. This intraspecies colonization resistance requires Salmonella hydrogenase-mediated invasion of the distal gut and is maintained by the virulence effectors SPI1 and SPI2. We describe novel observations indicating that Salmonella virulence effectors that have been shown to subvert the host immune response and microbiota also play a role in intraspecies competition for colonization of transmission niches.

A tagged strain approach reveals formation of Salmonella subpopulations during persistent infection
To define the Salmonella population structure that arises during chronic infection, we employed a previously established tagged strain approach using a mixture of barcoded, phenotypically equivalent S. Typhimurium strains [23]. These Salmonella wild-type isogenic tagged strains (WITS) each carry a unique 40 base pair tag in between the malX and malY pseudogenes, are equally fit, and have been applied to studies of acute infection [23]. Utilizing these previously published sequence tags, we constructed 8 WITS strains in the S. Typhimurium SL1344 background (W1-W8; Table S1) and confirmed each strain to be equally fit when grown in broth culture (Figure S1A). 129X1/SvJ mice, which possess a wild-type Nramp1 allele and can be persistently colonized with S. Typhimurium [31][32][33], were orally inoculated with 10^8 colony forming units (CFU) of an equal mixture of strains W1-W8 (Figure S1B-C). Total WITS CFU were enumerated by plating (Figure S1D) and qPCR was performed to determine the WITS abundances in systemic (spleen, liver, gallbladder) and intestinal (PP, small intestine, cecum, colon, feces) sites after 35 days of infection. Individual mice had WITS profiles that were distinct from other animals, with certain WITS comprising the majority of Salmonella found within infected tissues, varying on a mouse-by-mouse basis (Figure 1A). However, combined analysis of all infected mice revealed that all 8 WITS strains were represented in every tissue compartment (Figure 1A), and there was no statistically significant difference between the relative abundances of the WITS strains in each of the tissues, indicating all 8 WITS are equally represented in vivo (Table S2, one-way ANOVA and Kruskal-Wallis tests). A control experiment in which 4 of the 8 WITS were underrepresented in the inoculum resulted in their subsequent underrepresentation within infected tissues (Figure S2), indicating that these 4 WITS did not have any fitness advantage during infection. The WITS compositions in systemic and intestinal tissues were compared in order to determine whether Salmonella subpopulations arose after 35 days of persistent infection, a time after which the bacteria have breached the intestinal epithelium and have spread systemically to the liver and spleen. The strain composition within individual mice varied depending on the site of infection (Figure 1A).
In order to quantify potential differences in WITS abundances, we utilized a Bray-Curtis dissimilarity statistic, which has been commonly used in community abundance analyses in ecology and studies of the microbiota [34][35][36]. This calculation was applied to our model to obtain population-level distance values of WITS compositions in different sites. Bray-Curtis values were calculated between the WITS relative abundances of two tissues (see Materials and Methods), in which a score of 0 indicates an identical WITS profile in both organs and a score of 1 indicates completely dissimilar populations. A dissimilarity matrix was calculated for all tissue comparisons (Table 1). The subpopulation in the liver closely matched that of the spleen with a low mean dissimilarity score of 0.248 (Figure 1B, Table 1), which is consistent with these environments being highly connected by migration pathways through the bloodstream and/or lymphatics. In addition to colonizing systemic sites, Salmonella persisted within intestinal tissues for 35 days. However, in contrast to the spleen and liver, which contained an average of 3-4 WITS, the intestinal tissues were colonized by 1-2 strains (Figure 1A). This suggested that while there was some bottlenecking in dissemination to systemic sites, stronger selection pressures likely existed within intestinal tissues. Further analysis of the WITS profiles indicated that the strain compositions in proximal gut tissues (PP and small intestine) were dissimilar from those present in distal gut tissues (cecum and colon, Figure 1A), with dissimilarity scores of 0.416-0.575 (Figure 1B, Table 1, Figure S3A). In contrast, the WITS compositions in the cecum and colon were very similar with a score of 0.101 (Figure 1B, Table 1), which was significantly lower than the dissimilarity scores observed in the proximal gut (Figure S3A).
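The Bray-Curtis calculation used here follows the standard definition: the sum of absolute abundance differences divided by the total abundance, giving 0 for identical profiles and 1 for profiles with no shared strains. A minimal sketch, using hypothetical WITS relative-abundance profiles rather than the study's data:

```python
def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two abundance vectors:
    sum of absolute differences divided by the combined total.
    0 = identical composition, 1 = no shared strains."""
    num = sum(abs(a - b) for a, b in zip(x, y))
    den = sum(x) + sum(y)
    return num / den

# Hypothetical relative abundances of WITS W1..W8 in three tissues:
cecum  = [0.0, 0.9, 0.0, 0.1, 0.0, 0.0, 0.0, 0.0]
colon  = [0.0, 0.8, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0]
spleen = [0.4, 0.0, 0.3, 0.0, 0.3, 0.0, 0.0, 0.0]

print(round(bray_curtis(cecum, colon), 3))   # similar profiles -> low score
print(round(bray_curtis(cecum, spleen), 3))  # disjoint profiles -> 1.0
```

Because the two gut profiles share the same dominant strains, their score is near 0, whereas the spleen profile, dominated by entirely different strains, scores 1.0; this is the contrast the dissimilarity matrix in Table 1 captures.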
Together, these data suggest that during persistent infection, different subpopulations of Salmonella form between proximal and distal gut tissues. It is thought that Salmonella in the liver and gallbladder reseed the intestinal tract via bile, followed by subsequent shedding in the feces. If the bile ducts provided highly connected migration pathways between these sites, the WITS profiles should be similar between systemic and intestinal tissues. Although not all of the mice were colonized by Salmonella in the gallbladder (Figure 1A), the WITS profiles in the gallbladder were most similar to the compositions of the spleen and liver from these mice (Table 1, Figure S3B). In contrast, the WITS compositions in the gallbladder were very different from the composition within the intestinal tissues (Figure 1B, Table 1, Figure S3B). In addition, the WITS compositions in the distal gut were distinct from those in the systemic tissues with high dissimilarity scores >0.816 (Figure 1B, Table 1). Collectively, analysis of the WITS compositions in various compartments within each infected mouse demonstrates that spatially delimited Salmonella subpopulations form during persistent infection, with systemic organs containing populations that are distinct from those in intestinal tissues.

Increases in fecal shedding are attributed to clonal expansion of colonic Salmonella
Since host-to-host transmission requires high levels of Salmonella shed in the feces [4,33], we wished to elucidate the kinetics and population dynamics of Salmonella shedding. Fecal samples were collected at various time points throughout the 35-day infection period (Figure 2A). An average of 6-7 WITS were present in feces after one day of infection, indicating some initial bottlenecking effects in the oral infection route may have occurred (Figure 2A-B).
However, even greater dynamic changes in WITS compositions were observed at early time points in infection, with different strains shed at 7 and 14 days post-infection compared to day 1 ( Figure 2A). Importantly, there was a dramatic decrease in the number of strains detected in the feces to an average of 1-2 WITS, which did not change during the 35-day infection (Figure 2A-B). Importantly, the sharp decrease in the number of strains shed in the feces on day 7 correlated with an increase in total fecal Salmonella CFU ( Figure 2B), suggesting that clonal expansion of dominant WITS strains was responsible for increased fecal shedding. To ascertain the tissue compartment that served as the source of clonal Salmonella expansion, we compared the WITS relative abundance profiles of the feces to both systemic and intestinal tissues to identify similarities. Although Salmonella initially invade the PP, the WITS compositions in the PP compared to the feces were significantly different at 35 days post-infection (Figures 2C, Table 1). In addition, the compositions of the Salmonella populations within systemic sites compared to the population composition in the feces were even more dissimilar (Table 1). This further corroborated our earlier finding that distinct Salmonella subpopulations arose between systemic and intestinal compartments. Furthermore, we did not observe an increase in the number of WITS strains present during increased fecal shedding ( Figure 2B), which would be expected to occur if increased reseeding of systemic Salmonella was the source. Instead, these analyses revealed that the WITS profiles in both the cecum and colon very closely matched the composition of Salmonella shed in the feces ( Figure 2B; Table 1). Importantly, the dissimilarity values between the distal gut sites and the feces were significantly lower than that of any other tissue compartment analyzed ( Figure 2C, Table 1, Figure S3). 
Taken together, our results indicate that a clonal expansion of cecal and colonic Salmonella is responsible for the increases in fecal shedding.

[Figure 1 caption (displaced, partial): individual mice (1)-(19); systemic tissues are highlighted in gray. Inset: composition of the WITS inoculum. B) Bray-Curtis dissimilarity values of WITS relative abundances in organ A versus organ B. Each circle represents an individual mouse (n = 19), lines represent medians. The lowest median dissimilarity in the depicted comparisons was observed between the colon and cecum (black circles). Intergroup differences were evaluated by paired t-tests. ** p = 0.0024, *** p = 0.0002, **** p<0.0001. doi:10.1371/journal.ppat.1004527.g001]

Systemic Salmonella are excluded from the distal gut and subsequent fecal shedding
The results of our WITS experiment demonstrated that distinct subpopulations formed in systemic and intestinal tissues by 35 days post-infection (Figure 1). However, even though high Bray-Curtis scores were computed between systemic and distal gut tissues, values were <1, indicating that small percentages of WITS were shared between these sites. One limitation of our mixed inoculum approach was that we could not discern the directionality of dissemination. For example, it could not be determined whether WITS present in the distal gut were part of the initial population or if they arrived secondarily by seeding the intestinal tract from systemic sites. Determining the relative contribution of systemic and intestinal strains to fecal shedding therefore required a strategy to mark Salmonella in these different sites within the host. To address this, we developed a co-infection model that employed isogenic marked strains rapidly identifiable by differential plating on antibiotics. We used the parental streptomycin-resistant SL1344 strain that has a missense mutation in hisG, which is not required for virulence, and an isogenic SL1344-kanR strain containing a kanamycin resistance cassette inserted at this site (hisG::aphT). These strains are equally fit in single and in mixed infections in mice inoculated by oral or IP routes [33]. In our co-infections, each mouse received 10^8 CFU of one strain by oral inoculation and 10^3 CFU of the isogenic strain by intraperitoneal (IP) injection. The IP route bypasses the gastrointestinal tract, such that Salmonella colonize systemic tissues first [32]. To confirm that successful reseeding occurs in our model, Salmonella shedding and tissue burdens were compared in control mice that received single IP infections or single oral infections. Systemic IP-delivered Salmonella reseeded the small intestine, where they reached the same range of fecal shedding levels by 14 days post-infection as mice infected orally (Figure S4). However, the oral inoculation route resulted in >1,000-fold more Salmonella fecal CFUs 1 day post-infection compared to the IP route, and reached peak fecal shedding levels more rapidly (Figure S4A). Thus, in the co-infection model, the oral strain establishes an infection in the gut before the systemic strain reaches the intestine, allowing us to test the strength of priority effects in Salmonella population assembly. Mice injected IP with a single Salmonella strain shed this strain in the feces as soon as 1 day post-infection (Figure S4A). This was in contrast to what occurred in mice that had been co-infected orally with an isogenic WT strain (Figure 3). The systemic strain was detected in the feces of only 5 of the 54 mice throughout the 30 days of infection (Figure 3A-C). Importantly, shedding of the systemic strain only occurred on a single day and did not persist.
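The differential-plating readout reduces to simple arithmetic: colonies on the non-selective plate count both strains, while only SL1344-kanR grows on kanamycin, so the two strains' shares follow directly from the two counts. A sketch under that assumption; the helper name and the CFU values are illustrative, not from the paper's methods:

```python
def strain_fractions(total_cfu, kan_cfu):
    """Infer the shares of two isogenic strains from differential plating:
    total_cfu = colonies on the non-selective plate (both strains),
    kan_cfu   = colonies on the kanamycin plate (SL1344-kanR only).
    Returns (SL1344 fraction, SL1344-kanR fraction)."""
    if total_cfu == 0:
        return 0.0, 0.0
    kan_frac = kan_cfu / total_cfu
    return 1.0 - kan_frac, kan_frac

# Hypothetical counts from one tissue homogenate:
wt, kan = strain_fractions(total_cfu=2.0e6, kan_cfu=4.0e5)
print(wt, kan)  # -> 0.8 0.2
```

In practice the counts would come from serial dilutions of the same homogenate plated in parallel on the two media, with the dilution factors cancelling in the ratio.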
Since mice shed variable levels of Salmonella [33,37], we wondered whether this would influence the ability of the IP strain to be shed. Surprisingly, the oral strain was exclusively shed in the feces of low (<10^4 CFU/gram), moderate (<10^8 CFU/gram), and super (≥10^8 CFU/gram) shedder mice (Figure 3A-C). In addition, when the reciprocal combination of strains (oral: SL1344-kanR, IP: SL1344) was used, the same result was obtained throughout 60 days of infection (Figure 3, Figure S5A). Taken together, these results indicate that the established intestinal strain prevents colonization of the cecum and colon by Salmonella disseminating from systemic tissues. We next asked what the composition of the Salmonella strains was within systemic tissues of mice that had been co-infected for 30 days. In contrast to the cecum and colon, the IP and oral strains were both present within systemic tissues after 30 days of co-infection. The spleen and liver comprised similar abundances of both strains (Figure 4), indicating that intestinal Salmonella effectively disseminated to systemic sites. Although the orally inoculated strain was present in the gallbladder, the IP strain comprised >80.44% of the total Salmonella population in this organ (Figure 4). In addition, the IP strain was present as a minority of the population in the PP (19%), small intestine (30%), and mLN (38%) (Figure 4). The IP strain was not detected in the cecum and colon in 25 out of 28 mice, and comprised <8% in the remaining animals (Figure 4). Strikingly, the oral strain remained dominant in the cecum and feces during the 60-day infection (Figure S5). Thus, our results from the co-infection model and the WITS analyses suggest that Salmonella established in the cecum and colon prevent systemic subpopulations from colonizing important niches that are required for fecal shedding.
Increased Salmonella levels in the gallbladder do not lead to fecal shedding of systemic bacteria

One possible explanation for the dominance of the oral strain in the distal gut and feces could be that there is insufficient reseeding of systemic Salmonella into the gastrointestinal tract. To test this possibility, we utilized an established gallstone model of infection, in which S. Typhimurium biofilm formation on gallstones increased reseeding and subsequent fecal shedding by >1,000-fold [38]. We fed mice a lithogenic diet for 10 weeks to induce gallstone formation, which resulted in 1-9 stones/mouse as confirmed by ultrasound imaging (Figure S6). In contrast, mice on a standard diet never developed gallstones (Figure S6). As previously demonstrated, mice with gallstones that were infected with 10^3 S. Typhimurium by IP injection shed >1,000-fold higher levels of bacteria 7 days post-infection compared to control mice (Figure S7). To determine whether increased levels of S. Typhimurium in the gallbladder would allow systemic bacteria to colonize the cecum and/or colon, mice with diet-induced gallstones were co-infected orally with SL1344 and IP with SL1344-kanR.

Table 1. Bray-Curtis dissimilarity values of WITS relative abundances within murine tissues during persistent infection.

Figure 2. B) Left y-axis: number of WITS strains present at the specified time points post-infection (black line: mean, SD). Right y-axis: total Salmonella CFU enumerated from fresh fecal pellets collected throughout infection (gray columns: mean, SD). C) Bray-Curtis dissimilarity values between WITS relative abundances in organ A and organ B (feces). Each circle represents an individual mouse (n = 19), lines represent medians. Intergroup differences were evaluated by paired t-tests. ns = not significant, ** p = 0.0041, *** p = 0.0010, **** p < 0.0001. doi:10.1371/journal.ppat.1004527.g002
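The Bray-Curtis dissimilarity used to compare WITS compositions between tissues has a simple closed form: the sum of absolute abundance differences divided by the total abundance. A minimal sketch (ours, not the authors' analysis code) is:

```python
from typing import Sequence

def bray_curtis(a: Sequence[float], b: Sequence[float]) -> float:
    """Bray-Curtis dissimilarity between two abundance vectors
    (e.g. counts of WITS tags W1-W8 in two organs).

    Returns 0.0 for identical compositions and 1.0 when the two
    samples share no tags at all.
    """
    if len(a) != len(b):
        raise ValueError("abundance vectors must be the same length")
    numerator = sum(abs(x - y) for x, y in zip(a, b))
    denominator = sum(x + y for x, y in zip(a, b))
    return numerator / denominator if denominator else 0.0
```

Because the metric is computed on relative abundances of the same eight tags in each pair of organs, a low value between spleen and liver and a high value between spleen and colon is what supports the paper's claim of distinct systemic and intestinal subpopulations.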
By 14 days post-infection, mice with gallstones had a mean Salmonella gallbladder burden >10,000-fold higher than mice without gallstones (Figure 5A). This represented an increase in systemic Salmonella, as the gallbladders were exclusively colonized by the IP strain (Figure 5B). In addition, mice with diet-induced gallstones had significantly higher levels of S. Typhimurium in the small intestine, indicating that increased numbers of systemic bacteria had reseeded this site (Figure 5B). Despite this drastic increase in the levels of systemic Salmonella reseeding the intestine, the established intestinal strain remained dominant in the cecum, colon, and feces (Figure 5C). Taken together, our data suggest that in the presence of an established Salmonella strain, systemic Salmonella are excluded from colonizing crucial transmission niches in the distal gut.

The established intestinal strain is resistant to super-colonization regardless of infection route

Although the presence of gallstones increased the numbers of S. Typhimurium in the gallbladder to 10^3-10^7 CFU/organ as well as subsequent reseeding of the small intestine, it is possible that these levels were insufficient to compete with the established intestinal subpopulations (Tables S3, S4). Indeed, we have measured the levels of Salmonella in gastrointestinal sites and found that there is a range of 10^1-10^8 total CFU (Table S3). To address this issue, we performed sequential infections in which resident intestinal Salmonella were challenged with a high oral dose of a second strain. First, mice were inoculated with 10^3 SL1344 by IP injection to establish a systemic infection. This initial Salmonella strain was detected in the feces after 5-7 days and was persistently shed for 35 days (Figure 6A). These mice were then super-infected with 10^8 SL1344-kanR orally.
Although the orally inoculated strain was detected in the feces 1 day post-infection (dpi), it was not detected in the feces for the remaining 7 days post-oral inoculation (35-42 dpi, Figure 6A). The challenging oral strain was not detected in any systemic or intestinal tissues by 7 days post-challenge (42 dpi, Figure S8A). This demonstrates that super-infecting strains are excluded from colonizing the intestine in the presence of a resident, persistent intestinal Salmonella infection, regardless of the route of inoculation. Collectively, our results suggest that there is intraspecies competition for a transmission niche in the distal gut.

Figure 3. Differentially marked, isogenic Salmonella strains were used in co-infections. Mice were given 10^8 SL1344 by drinking and injected intraperitoneally (IP) with 10^3 SL1344-kanR immediately afterwards. The reciprocal combination of strains was also tested and included. A-C) Salmonella CFU/gram feces were determined from fecal pellets collected over 30 days of infection. SL1344-kanR was identified by differential plating or patching onto LB agar containing 40 µg/ml kanamycin. Oral strain CFU (black) are plotted on the top half of the graph and the IP strain CFU (gray) on the bottom. Limit of detection for a single fecal sample is 10 CFU/gram feces. Each plot represents Salmonella fecal CFU (median, range) for mice shedding at A) low (n = 8), B) moderate (n = 34), and C) super shedder (n = 12) levels. D) Oral (black) and IP (gray) strain composition of Salmonella shed in feces after 30 days of co-infection (mean, SD). Data represent four independent experiments (n = 54). **** p < 0.0001, Wilcoxon matched-pairs signed rank test.
doi:10.1371/journal.ppat.1004527.g003

Figure 4. Systemic Salmonella are excluded from the distal gut during competition with an established gut strain. Mice were co-infected with SL1344 orally and SL1344-kanR IP. Animals were sacrificed after 30 days of infection. Specified organs were collected and plated for determination of oral (black) and IP (gray) strain abundance. The reciprocal combination of strains was also tested and included. Data represent three independent experiments (n = 28). Percent strain composition of Salmonella in tissues (mean, SD). ns = non-significant, *p = 0.0159, ** p = 0.0087, ***p = 0.0002, **** p < 0.0001, Wilcoxon matched-pairs signed rank tests. doi:10.1371/journal.ppat.1004527.g004

Priority effects govern gut colonization and fecal shedding of Salmonella

Based on our evidence of intraspecies competition for a distal gut niche, we proposed that the dominance of established Salmonella in the cecum and colon is attributed to priority effects that govern distal gut colonization and subsequent fecal shedding. To test this notion, we performed sequential S. Typhimurium infections to evaluate the duration and strength of these competitive interactions. Mice were infected with 10^8 SL1344 orally, and fecal shedding of Salmonella was monitored. All of the mice continued to shed Salmonella over the 102 days of infection (Figure 6B). After 102 days, the mice were inoculated orally with 10^8 CFU of a second competing strain, SL1344-kanR. The competing strain was detected in the feces during the first 3 days post-infection (Figure 6B). However, by 14 days post-challenge (116 dpi), 28 of the 45 mice were no longer shedding the competing Salmonella strain (Figure 6B). Finally, by 35 days post-challenge (137 dpi), the competing strain was not detected in the feces (Figure 6B), intestinal compartments, or systemic tissues of co-infected mice (Figure S8B). The reciprocal strain combination was also tested: mice were first infected with 10^8 SL1344-kanR orally for 60 days before subsequent challenge with 10^8 SL1344, in which case the competing strain was cleared from the feces by 20 days post-challenge (Figure S9A). Thus, this colonization resistance against the same Salmonella species was maintained during the chronic stages of infection. We next sought to determine whether the levels of the initial oral strain (SL1344) in the colon and in the feces would influence the clearance kinetics of the second competing oral strain (SL1344-kanR). One day after the second oral inoculation, the percentage of the competing strain varied depending on the level of shedding of the resident strain. For example, in mice that were shedding >10^8 CFU/g feces (super shedder mice), the competing SL1344-kanR strain comprised 4.88% of the total population on the first day post-secondary inoculation (Figure S9B). In contrast, for low and moderate shedder mice, the competing strain comprised 42.02% and 35.58% of the total population, respectively (Figure S9B). These differences remained significant 5 days after infection with the second competing SL1344-kanR strain. However, by days 10 and 14, the second SL1344-kanR strain was no longer detected in the feces of any of the mice (Figure S9B). These results indicate that more robust and rapid priority effects are exhibited in mice that are colonized with higher colonic Salmonella loads. Finally, to determine whether we would see the same intraspecies priority effects in the distal gut during host-to-host transmission, we utilized our previously established model of transmission from an infected, super shedder mouse to uninfected mice in the same cage [33]. In this experiment, the donor mouse was orally infected with SL1344-kanR and was shedding >10^8 CFU/g at 14 days post-inoculation (Figure 6C). As a positive control for host-to-host transmission, the donor mouse was cohoused with uninfected mice.
Similar to our previous results, naïve mice began shedding SL1344-kanR within 24 hours and continued to shed even after the donor was removed (Figure S10A). In contrast, recipient mice that had been infected for 14 days with SL1344 required 10 days of cohousing before low levels of the donor strain (<0.02% of all Salmonella) were detected in the feces (Figure 6C). In addition, shedding of the donor strain in the previously infected recipient mice was transient, and the donor strain was not detected in the feces or tissues 10 days post-cohousing (Figure 6C, Figure S8C). Similar results were obtained when a SL1344 super shedder was cohoused with SL1344-kanR-infected mice. Cohousing for 12 days was required before the donor strain could be detected in the feces of recipient SL1344-kanR mice (Figure S10B). The super shedder donor was left in the cage for an additional 6 days before removal, but consistent with previous findings, the donor strain was not detected in the feces of recipient mice by day 23 post-cohousing (Figure S10B). Together, these experiments show that priority effects determine Salmonella population assembly in intestinal transmission niches, where established subpopulations exert colonization resistance against incoming challengers.

Figure 5. Gallstone-induced increases in systemic reseeding Salmonella are insufficient to displace the established population in the cecum and colon. Mice on an 11-week lithogenic diet developed cholesterol gallstones (green, n = 13), while mice on the normal base diet were devoid of gallstones (white, n = 10). All mice were co-infected with 10^8 SL1344 orally and 10^3 SL1344-kanR by IP injection. Feces and specified tissues were collected 14 days post-infection to calculate Salmonella burden and strain abundance. A) Total Salmonella CFU per gram gallbladder tissue (mean, SD), B) percent abundance of the IP strain in the gallbladder and small intestine, and C) percent abundance of the oral strain in the cecum, colon, and feces were determined in control mice and mice with gallstones. Each circle represents an individual mouse, lines represent means. Data are representative of two independent experiments. ns = non-significant, *p = 0.0224, ***p = 0.0004, ****p < 0.0001, unpaired Mann-Whitney tests. doi:10.1371/journal.ppat.1004527.g005

Figure 6. Established strains are resistant to super-colonization regardless of infection route. Sequential infections in mice were performed using SL1344 as the initial strain (Strain 1, black) and SL1344-kanR as the competing strain (Strain 2, gray). Individual mice were monitored for fecal Salmonella shedding at the indicated time points. Pellets were plated to determine total Salmonella CFU and to discern strain abundances. Limit of detection for a single fecal sample is 10 CFU. A) Mice were first infected with 10^3 SL1344 by IP injection. After 35 days, mice were challenged with 10^8 SL1344-kanR by drinking. Left: Fecal CFU (mean, SD) of established (IP) and competing (oral) strains. Right: Geometric mean of strain CFU in feces. Data are representative of two separate experiments (n = 10). B) Mice were first inoculated with 10^8 SL1344 (black) by drinking, which established a persistent infection for 102 days. Animals were subsequently challenged with 10^8 SL1344-kanR (gray) by drinking. Left: Fecal CFU (mean, SD) of established strain SL1344 and challenge strain SL1344-kanR in feces. Right: Geometric mean of strain CFU in feces. Data are representative of two separate experiments (n = 45). C) Mice were first infected orally with 10^8 of either SL1344 or SL1344-kanR 14 days prior to cohousing. Left: schematic of cohousing experiment; a SL1344-kanR super shedder donor (gray) was cohoused with SL1344-infected recipient mice (black) and a naïve mouse (white) as a control. Right: total Salmonella CFU/gram feces in cohoused mice. Gray asterisks indicate shedding levels of the SL1344-kanR super shedder donor, which was removed after 10 days of co-housing. Lines depict the geometric mean Salmonella CFU/gram feces of recipient mice shedding the established SL1344 strain (black) or challenging donor SL1344-kanR strain. Data are representative of two independent experiments (n = 2 donors, 8 recipients).

Ablation of the established intestinal subpopulation permits distal gut colonization and fecal shedding of systemic Salmonella

Since the established subpopulation of Salmonella in the cecum and colon exerts colonization resistance, we proposed that its removal would allow challengers to occupy vital transmission niches. To test this idea, mice co-infected with 10^8 SL1344 orally and 10^3 SL1344-kanR IP for 7 days were then treated with a single dose of kanamycin. Kanamycin is not absorbed systemically and thus was used to ablate the extracellular, kanamycin-sensitive bacteria in the gastrointestinal tract. Within 24 hours of antibiotic administration, fecal shedding of the established intestinal SL1344 strain decreased by ~5 logs (Figure 7A, left). Concomitant with the decrease in the established strain, over 10^7 CFU of the systemic SL1344-kanR strain was shed per gram of feces (Figure 7A, left). By 4 days post-antibiotic treatment, the systemic strain was exclusively shed in the feces (Figure 7A, left) and was transmitted to naïve recipients (Figure 7A, right). Thus, priority effects arose during the first 7 days of infection, which coincided with the clonal expansion in the distal gut and feces observed in the WITS studies (Figure 2). Based on these findings, we hypothesized that Salmonella strains were competing for limited nutrient or spatial resources within the cecum and colon, which inhibited the ability of systemic strains to colonize the distal gut. We tested this notion by gavaging co-infected mice (SL1344 oral, SL1344-kanR IP) with 5 mg streptomycin in order to disrupt the microbiota and make more of these resources available [37,39-41]. Both Salmonella strains are streptomycin-resistant, and previous studies have shown that streptomycin treatment of infected mice increases Salmonella fecal shedding to super shedder levels [33,37]. We observed that all streptomycin-treated mice became super shedders, yet the increase in fecal Salmonella CFU reflected expansion of the oral SL1344 strain (Figure S11). This indicated that disrupting the microbiota with streptomycin treatment was insufficient to permit shedding of the systemic strain, as the newly available resources were likely immediately utilized by established intestinal Salmonella. Furthermore, since kanamycin does not enter mammalian cells, these results collectively indicate that established intestinal Salmonella occupy an extracellular transmission niche in the distal gut and exclude the bacteria that are reseeding the intestine from systemic sites.

Early inhibitory priority effects in the gastrointestinal tract are facilitated by intraspecies competition for a nutritional niche

Our data show that intraspecies priority effects govern Salmonella population assembly in the distal gut. Since our previous results demonstrated that clonal expansion and priority effects in the cecum and colon could occur by 7 days post-infection (Figure 2, 7A), we hypothesized that nutrient acquisition was very important during this stage of colonization. Indeed, ecological theory has implicated competition for nutrients as an important determinant in priority effects and community structure [28]. S. Typhimurium hydrogenase (hyb) is a key mediator of cecal ecosystem invasion and is required to consume a microbiota-derived metabolite [30]. In the un-inflamed gut of conventional mice with complex microbiota, hydrogenase enzymes facilitate consumption of hydrogen (H2) intermediates in a SPI1- and SPI2-independent manner [30]. Similarly, we show here that Hyb is important for gut colonization and fecal shedding in 129SvJ mice with an intact conventional microbiota (Figure S12). To test the role of Hyb in intraspecies priority effects, co-infections were performed in which all mice were injected IP with 10^3 wild-type (WT) SL1344 bacteria; one group of mice was co-inoculated orally with 10^8 ΔhybΔSPI1ΔSPI2 isogenic mutant S. Typhimurium, while control mice were co-inoculated orally with 10^8 WT SL1344-kanR. The hyb mutation was constructed in a ΔSPI1ΔSPI2 background to assess the need for hydrogenase in the context of a non-inflamed gut. The relative levels of each strain in the feces were monitored over 15 days of co-infection (Figure 7B-C). Importantly, the Salmonella shed in the feces at 4 and 7 days contained systemic WT bacteria, and by 15 days post-infection were entirely comprised of the systemic WT strain in mice that received ΔhybΔSPI1ΔSPI2 orally (Figure 7B). Total levels of fecal Salmonella were significantly lower in the ΔhybΔSPI1ΔSPI2 co-infection group compared to controls (Figure S13A), which corresponds to the decreased fecal shedding of ΔhybΔSPI1ΔSPI2 mutants during single oral infections (Figure S12B). The strain compositions in the feces of these mice throughout infection indicated that the increase in total fecal CFU on day 15 reflected rapid reseeding and shedding of the systemic WT strain concomitant with declining levels of the oral ΔhybΔSPI1ΔSPI2 strain (Figure 7C). Taken together, we have demonstrated that the hydrogenase mutant was unable to effectively invade the cecal and colonic niche (Figure 7D), thereby nullifying any priority effects and allowing systemic Salmonella to colonize the distal gut with subsequent transmission in feces (Figure 7C-D).

S. Typhimurium SPI1 and SPI2 are required to maintain intraspecies colonization resistance during persistent infection

To determine whether intraspecies colonization resistance was still dependent on the maintenance of the extracellular intestinal niche during persistent infection, kanamycin treatment was repeated in chronically co-infected mice. The systemic strain was exclusively shed in the feces 2 days post-antibiotic treatment (Figure 8A, left), and comprised the entire Salmonella population in the cecum and colon after 7 days (Figure 8A, right). These data thus indicate that the extracellular niche in the cecum and colon is required to maintain intraspecies colonization resistance during persistent infection, which actively inhibits successful fecal shedding of systemic Salmonella. To gain more insight into how S. Typhimurium competitively excludes incoming challengers from colonizing the distal gut niche, we tested the potential role of the key virulence factors Salmonella Pathogenicity Islands SPI1 and SPI2, which encode type III secretion systems that deliver effector proteins required for persistence in host tissues [32,42-44] and fecal transmission [33]. Co-infections were performed in which mice simultaneously received 10^3 WT SL1344 by IP and 10^8 isogenic ΔSPI1ΔSPI2 mutant bacteria orally. In the ΔSPI1ΔSPI2 co-infected mice, the IP-injected WT bacteria were not present in significant numbers at day 7 (Figure 8B). However, by day 25, 21.43% of all fecal Salmonella were WT bacteria, and by day 70, 98.86% were WT S. Typhimurium (Figure 8B). In addition, the total fecal Salmonella CFU in the control (SL1344 oral, SL1344-kanR IP) and the ΔSPI1ΔSPI2 co-infected mice were similar (Figure S13B), which is consistent with our result that the systemic WT strain reseeded and replicated within the intestinal tract once the ΔSPI1ΔSPI2 mutant was cleared (Figure 8C).
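Shedding curves in the figures are summarized as geometric means of CFU/gram, with a limit of detection of 10 CFU per fecal sample. A minimal sketch of that summary statistic follows; substituting the limit of detection for non-detects is one common convention, and the paper does not state which it used, so treat this as an assumption.

```python
import math

LOD_CFU_PER_G = 10.0  # limit of detection quoted in the figure legends

def geometric_mean_cfu(values, lod=LOD_CFU_PER_G):
    """Geometric mean of fecal Salmonella loads (CFU/gram).

    Non-detects (zeros) are replaced with the limit of detection so the
    log transform is defined; this floors, rather than drops, censored
    samples.
    """
    adjusted = [v if v > 0 else lod for v in values]
    logs = [math.log10(v) for v in adjusted]
    return 10 ** (sum(logs) / len(logs))
```

The geometric mean is the natural summary here because shedding spans many orders of magnitude (10^1 to >10^8 CFU/gram), and an arithmetic mean would be dominated by the handful of super shedder samples.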
Indeed, examination of strain abundances in intestinal tissues after 70 days of co-infection confirmed that the systemic IP strain had predominantly colonized the mLN, small intestine, cecum, and colon, while the initial ΔSPI1ΔSPI2 mutant was cleared from these sites (Figure 8D). These studies demonstrate that SPI1 and SPI2 are required for the established intestinal Salmonella population to maintain active colonization resistance against systemic reseeding bacteria.

Discussion

Microbial fecal shedding by chronically infected hosts is the major source of new infection and disease for many enteropathogenic microbes. However, very little is known about the dynamics of Salmonella subpopulations within mammalian hosts and their relative contributions to host-to-host transmission. Community assembly theory provides a framework for understanding infection processes, and in this study, we defined the S. Typhimurium metapopulation structure that arose during persistent infection. We then applied ecological principles that govern community assembly to determine the contribution of different Salmonella subpopulations to fecal shedding. Our tagged-strain approach revealed that distinct S. Typhimurium subpopulations arose within different host tissues, resulting in a metapopulation structure with variable migration between sites. After 35 days of infection, the WITS compositions of the liver and spleen closely matched each other, suggesting that robust migration pathways in the blood and lymphatics exist between these tissues. Previous studies of acute infection in susceptible C57BL/6 mice determined that hematogenous spread 48 hours post-infection resulted in S. Typhimurium mixing between the spleen and liver [23]. Expanding our WITS analyses to include a more comprehensive set of infected tissues, we determined that Salmonella in systemic sites were distinct from subpopulations in the intestinal tract.
Interestingly, we found that the WITS profiles in the PP and small intestine were also dissimilar from those in the cecum and colon. This likely represents stochastic invasion of the PP by a subset of individual WITS strains, while different subsets initiate separate infection foci in other tissues. Indeed, a recent study of early infection dynamics determined that PP invasion fueled spread to the mLN, while an independent pool of bacteria initiated splenic and hepatic infection [24]. Our work suggests that these initial colonization dynamics shape the metapopulation structure that arises and is maintained throughout persistent infection. By quantifying the differences between systemic, proximal, and distal gut sites with Bray-Curtis dissimilarity scores, we gained new insights into the importance of the distal gut as a transmission niche. Surprisingly, we have shown that systemic Salmonella can only colonize the distal gut upon clearance of the established intestinal subpopulation with an oral kanamycin treatment. In contrast, treatment with streptomycin, to which the SL1344 strain is resistant, was insufficient to permit shedding of the systemic strain. This suggests that disrupting microbiota-mediated colonization resistance does not create new niches for systemic bacteria to colonize. Previous studies found that administration of ciprofloxacin killed extracellular Salmonella and permitted tolerant bacteria within dendritic cells of the cecal lymph node to colonize the cecum [45,46]. Although this fluoroquinolone treatment also ablated systemic Salmonella [45], these studies all highlight the intensely competitive dynamics between Salmonella within the distal gut. Competition for gut colonization was also reported with E. coli K12 strains in germ-free mice, although differences in colonization ability were due to varying fitness costs of antibiotic resistance [47].
Our study with isogenic strains supports the idea that intraspecies competition for nutrients excludes systemic bacteria from colonizing the distal gut, in which established Salmonella have saturated a required niche. Intraspecies priority effects have recently been described for commensal species of Bacteroides [48] and E. coli [49], but our findings with an enteropathogen that causes persistent systemic infection are novel. There may also be evidence of these competitive interactions during Yersinia enterocolitica microcolony formation within intestinal tissues, in which previously infected PP were less likely to be super-infected [21]. However, it remains unknown whether colonization can proceed if established Yersinia are eliminated. It is possible that this may be unique to Salmonella rather than a broad enteropathogen phenomenon, as this colonization resistance was not seen between isogenic Campylobacter jejuni strains in a transmission study involving chickens [50]. Host-adapted Salmonella serovars infect the gastrointestinal tract before disseminating to systemic sites such as the gallbladder, which has classically been thought to be the source of Salmonella transmitted in feces [51]. However, the contribution of systemic reseeding in the presence of an established Salmonella intestinal tract infection had never been investigated. We show that an established intestinal strain persisted in the cecum and colon, even when gallstone formation increased gallbladder levels of S. Typhimurium >10,000-fold. It is interesting to speculate that intraspecies colonization resistance may occur in other hosts that are persistently infected by Salmonella. For example, humans can carry S. Typhi for long periods of time, possibly in the gallbladder [52]. Although gallbladder removal sometimes cures patients, over 20% of carriers continued to shed S. Typhi and S. Paratyphi in their stool [53,54], which indicates an alternative persistent reservoir. While circulating S.
Typhi in Kathmandu are resistant to nalidixic acid and several fluoroquinolones, patient gallbladder isolates are more sensitive to nalidixic acid, gatifloxacin, and ofloxacin, indicative of a limited role in typhoid transmission [55]. The relative contributions of systemic versus intestinal populations of S. Typhi to transmission are not known. Perhaps the fecal "showers" of S. Typhi [4] are due to reseeding bacteria from systemic sites that gain access to spatial and nutritional resources in the gut. In our co-infection model, the oral strain comprised 88.83% of fecal Salmonella after 60 days, which was lower than the 97.96% observed after 30 days (p = 0.06, unpaired Mann-Whitney). Though this was not a significant difference, it is possible that the intestinal strain may lose its dominance at even later time points, at which point systemic Salmonella may reseed from mesenteric lymph node macrophages [31,56,57] and/or the gallbladder [1,2,10]. Intraspecies Salmonella colonization resistance could be shaping typhoid epidemiology in endemic regions, but future work is required to determine whether this occurs in other Salmonella serovars besides Typhimurium. We have found that the clonal expansion of the intestinal subpopulation is responsible for increases in S. Typhimurium fecal shedding. The mechanisms by which this subpopulation expands and establishes intraspecies colonization resistance are likely multifactorial. S. Typhimurium fimbriae and adhesins are important for attachment to intestinal tissues [58-60] and may play a role in this intraspecies dynamic. Host immune responses contribute to Salmonella clearance [61-63], and could also be involved in influencing intraspecies colonization resistance. However, intraspecies colonization resistance was observed at 14 days post-infection and lasted over 102 days in the context of co-infections, cohousing experiments, and sequential infections.
This suggests that neither the innate nor the adaptive immune response alone could be responsible for the exclusion of systemic reseeding Salmonella. Microbial communities undergo local diversification in different habitats within the host [12,64-68], and we considered the possibility that genetic mutations could be responsible for Salmonella expansion and intraspecies colonization resistance. Previous studies with marked isogenic strains determined that spontaneous mutations alone do not shape S. Typhimurium colonization dynamics or fecal transmission during persistent infection in 129Sv mice. The dominance of a re-isolated strain was lost upon subsequent infection or passage in broth, and the strain exhibited the same infectious dose (ID50) as a culture-grown strain [24,33]. A study of systemic S. Typhimurium infection revealed that the enhanced growth of bacteria was not due to the selection of mutants, but rather reflected transient phenotypic changes dependent on gene regulation [69]. Systemic Salmonella did not accumulate attenuating mutations during our experiments. This subpopulation adapted to the intestinal environment following ablation of the resident strain, and replicated to super shedder levels with rapid transmission to naïve mice. Salmonella transcriptional responses likely play an important role in expansion in the distal gut, and insight into these changes will elucidate other mechanisms by which priority effects are exerted. Our studies with a hydrogenase mutant revealed that Salmonella competition for a microbiota-derived nutrient is one mechanism by which a challenging systemic strain is excluded from the distal gut transmission niche. According to the monopolization hypothesis, rapid population growth upon colonization of a new habitat results in the effective monopolization of resources, producing a strong inhibitory priority effect [70].
Since Salmonella are mainly localized in extracellular regions of the distal gut [33], it is tempting to speculate that other Salmonella factors required for nutrient acquisition play a role in intraspecies colonization resistance. The importance of nutrient acquisition in establishing priority effects could be applied to the development of novel therapies, in which targeting key metabolic pathways could potentially prevent pathogen colonization and transmission. We have found that SPI1 and SPI2 contribute to intraspecies colonization resistance up to 70 days post-infection. Importantly, co-infected mice that received 10^8 ΔSPI1ΔSPI2 orally shed significant levels of WT systemic Salmonella beginning 25 days post-infection, with no significant changes in the total fecal shedding of Salmonella. This suggests that as soon as nutrient and/or spatial resources are made available by the clearance of the initial ΔSPI1ΔSPI2 mutant, WT Salmonella spread from systemic tissues and rapidly expand within the intestinal tract. The T3SS encoded by these Salmonella pathogenicity islands deliver over thirty effectors with diverse functions [42-44,71]. These effectors could act on Salmonella directly, or create an environment that kills strains reseeding from systemic tissues. These mechanisms could involve Salmonella-induced inflammation and modulation of the host immune response [72]. Inflammation also disrupts the host microbiota and allows the pathogen to metabolize newly available nutrients [73-76]. Future work will seek to determine which of these are involved in establishing priority effects and exerting intraspecies colonization resistance. Priority effects have long been known to shape community assembly in a variety of ecological systems, ranging from bacteria to larger eukaryotic organisms [27,64-66], but this is the first time the phenomenon has been described for pathogen subpopulations during persistent infection within a host.
In this landscape, the order in which S. Typhimurium arrive in the intestinal ecosystem dictates which bacteria are subsequently shed in the feces. The results presented herein demonstrate that colonization of distal gut tissues is a bottleneck for successful transmission, for which subpopulations of Salmonella compete. These studies may inform disease processes in host-adapted Salmonella serovars that cause invasive disease, yet are still transmitted fecal-orally. S. Typhimurium is a generalist pathogen that also infects livestock and humans, and thus our work has direct implications for public health [3,4]. Our findings also highlight the potential for the application of ecological principles to epidemiology in order to predict dominant circulating strains during outbreaks. This work also sheds light on potential mechanisms that influence human-to-human transmission of non-typhoidal diarrheal infections, which can also be invasive in certain patients [5,6]. A better understanding of these mechanisms might reveal novel therapeutic approaches, or even preventive measures for thwarting disease spread.

Ethics statement

Experiments involving animals were performed in accordance with NIH guidelines, the Animal Welfare Act, and US federal law.

Mouse strains and husbandry

129X1/SvJ and 129S1/SvImJ mice were obtained from Jackson Laboratories (Bar Harbor, ME). Male and female mice (5-7 weeks old) were housed under specific pathogen-free conditions in filter-top cages that were changed weekly by veterinary personnel. Sterile water and food were provided ad libitum. Mice were given 1 week to acclimate to the Stanford Research Animal Facility prior to experimentation.

Bacterial strains and growth conditions

The S. Typhimurium strains used in this study were derived from the streptomycin-resistant parental strain SL1344 [77]. A missense mutation (hisG46) in SL1344 results in histidine auxotrophy [78].
The isogenic SL1344-kanR strain was created by replacing the hisG coding sequence with that of a kanamycin-resistance cassette (hisG::aphT) using the methods of Datsenko and Wanner [33,79]. Genetic manipulations were originally made in the S. Typhimurium LT2 background before being transferred to SL1344 by P22 transduction. This methodology was also used to construct wild-type isogenic tagged Salmonella (WITS strains: W1-W8), in which a unique 40-bp signature tag and the kanamycin-resistance cassette were inserted between the malX and malY pseudogenes. Grant et al. previously established this approach and published the unique 40-bp sequence tags of the 8 WITS strains [23] employed in this study (Table S1). Growth curves of W1-W8 in LB broth cultures were performed by optical density readings and plating for colony forming units (CFU) per milliliter (Figure S1A). ΔSPI1ΔSPI2 (orgA::tet, ssaV::kan) was generated previously for use in other studies [80]. The Δhyb (hypOhybABC::cm) deletion was constructed as described by Maier et al., with P22 phage transduction to insert the deleted genomic region into the ΔSPI1ΔSPI2 strain ([30], Table S1). All constructs were verified by PCR. All S. Typhimurium strains were grown at 37°C with aeration in Luria-Bertani (LB) medium containing the appropriate antibiotics: streptomycin (200 μg/ml), kanamycin (40 μg/ml), tetracycline (15 μg/ml) and chloramphenicol (8 μg/ml). For mouse inoculation, an overnight culture of bacteria was spun down and washed with phosphate-buffered saline (PBS) before resuspension to obtain the desired concentration.

Mouse infections

Food was removed 16 hours prior to all mouse infections. In WITS experiments, mice were inoculated with an equal mixture of strains W1-W8 via oral gavage of 10^8 CFU in 100 μl PBS. For intraperitoneal (IP) infections, mice were injected with 10^3 CFU in 100 μl PBS as previously described [32].
In the co-infection model, mice drank an oral dose of 10^8 SL1344 in 20 μl PBS, then received an IP injection of 10^3 SL1344-kanR immediately afterwards. Co-infection experiments were repeated using the reciprocal combination of strains, SL1344-kanR (oral) and SL1344 (IP), which had no effect on the trends observed.

Monitoring fecal shedding of S. Typhimurium

Individual mice were identified by distinct tail markings and tracked throughout the duration of infection. Two to three fresh fecal pellets were collected directly into Eppendorf tubes and weighed at the indicated time points. Pellets were resuspended in 500 μl PBS and CFU/gram feces were determined by plating serial dilutions on LB agar plates with the appropriate antibiotics. Low (<10^4 CFU/gram), moderate (<10^8 CFU/gram), and super shedder (≥10^8 CFU/gram) mice were identified based on previously established criteria [33,37].

S. Typhimurium burden in blood and tissues

Following collection of fresh fecal pellets, animals were sacrificed at the specified time points. Blood was collected by cardiac puncture and animals were euthanized by cervical dislocation. Sterile dissection tools were used to isolate individual organs, which were weighed prior to homogenization. The entire gastrointestinal tract was removed, and the small intestine was immediately separated from the distal gut and transferred to a new sterile petri dish. Visible PP (3-6/mouse) were isolated from the small intestine using sterile fine-tip straight tweezers and scalpels. PP, mLN, spleens, livers, and gall bladders were collected in 1 ml PBS. The small intestine, cecum, and colon were collected in 3 ml PBS. Homogenates were then serially diluted and plated onto LB agar containing the appropriate antibiotics to enumerate CFU/gram tissue. For co-infections with SL1344 and SL1344-kanR, several dilutions were plated to ensure adequate colonies (>100 CFU per sample) for subsequent patch plating to determine strain abundance.
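The serial-dilution arithmetic and shedder classification described above are simple to make explicit; this is a minimal sketch with hypothetical helper names, using the shedding thresholds quoted in the text (<10^4 CFU/gram low, ≥10^8 CFU/gram super shedder):

```python
def cfu_per_gram(colonies, dilution_factor, plated_volume_ml,
                 resuspension_volume_ml, sample_mass_g):
    """Convert a colony count on one plate into CFU per gram of feces or tissue.

    colonies               -- colonies counted on the plate
    dilution_factor        -- total fold-dilution of the plated sample (1 = undiluted)
    plated_volume_ml       -- volume spread on the plate
    resuspension_volume_ml -- volume the pellet/tissue was homogenized in
    sample_mass_g          -- wet mass of the fecal pellet or tissue
    """
    cfu_per_ml = colonies * dilution_factor / plated_volume_ml
    return cfu_per_ml * resuspension_volume_ml / sample_mass_g


def shedding_category(cfu_per_g):
    """Classify a mouse by fecal shedding level, using the thresholds in the text."""
    if cfu_per_g >= 1e8:
        return "super"
    elif cfu_per_g >= 1e4:
        return "moderate"
    return "low"
```

For example, 50 colonies from a 10^4-fold dilution, 0.1 ml plated, a 0.5 ml resuspension, and a 0.05 g pellet give 5×10^7 CFU/gram, i.e. a moderate shedder.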
Genomic DNA extraction and WITS qPCR

For WITS experiments, 300 μl of tissue homogenate was inoculated into LB broth containing streptomycin (200 μg/ml) and kanamycin (40 μg/ml) as a recovery method to enrich for low-abundance strains. An UltroSpec 2100pro spectrophotometer (Amersham Biosciences, Piscataway, NJ) was used to obtain optical density readings of the resulting bacterial cultures. Genomic DNA (gDNA) was extracted from 2×10^9 S. Typhimurium from each sample in duplicate using a DNeasy blood and tissue kit (Qiagen, 69506) as per the manufacturer's protocol for Gram-negative bacteria. All qPCRs were performed on an Applied Biosystems 7300 real-time PCR system. A 25 μl reaction contained 12.5 μl of FastStart SYBR Green Master Mix with Rox (Roche, 04913914001), 8 μl DNase/RNase-free water, 0.75 μl each of forward and reverse (10 μM) primers (Table S1), and 3 μl of gDNA (1-10 ng). Standard curves were generated using gDNA from each W1-W8 strain. Reaction conditions were 50°C for 2 min; 95°C for 10 min; 40 cycles of 95°C for 15 s and 60°C for 1 min; followed by a dissociation stage of 95°C for 15 s, 60°C for 1 min, 95°C for 15 s, and 60°C for 15 s.

Determining relative abundance of strains

To determine presence of a WITS strain, the qPCR value had to be above a minimum threshold value. This measure of primer specificity was determined by a negative control matrix, in which a specific primer pair was tested on ~11.25 ng of non-template gDNA from each of the other 7 WITS strains. To test primer sensitivity, detection limits were determined by test plates containing known CFU of each strain. Briefly, colonies were washed off the plates with PBS and gDNA was extracted from plates with varying abundances of WITS (i.e. 1 CFU Strain A with 10^3-10^5 CFU Strain B). qPCR was performed and revealed a detection limit of 1 CFU/strain amidst over 4800 CFU from non-target strains.
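The threshold-then-normalize logic described above (standard-curve conversion of a Ct value to a quantity, zeroing of below-threshold calls, normalization to relative abundances) can be sketched as follows; function and parameter names are hypothetical, and the real slope/intercept/threshold values would come from the W1-W8 standard curves and the negative control matrix:

```python
def wits_relative_abundance(ct_values, std_slopes, std_intercepts, thresholds):
    """Estimate relative abundances of WITS strains from qPCR Ct values.

    ct_values      -- dict strain -> measured Ct (cycle threshold)
    std_slopes     -- dict strain -> slope of that strain's standard curve
    std_intercepts -- dict strain -> intercept of that strain's standard curve
    thresholds     -- dict strain -> minimum quantity counted as 'present'
                      (set from the no-template negative control matrix)
    Quantities below threshold are set to zero before normalization.
    """
    quantities = {}
    for strain, ct in ct_values.items():
        # Standard curve: Ct = slope * log10(quantity) + intercept
        q = 10 ** ((ct - std_intercepts[strain]) / std_slopes[strain])
        quantities[strain] = q if q >= thresholds[strain] else 0.0
    total = sum(quantities.values())
    if total == 0:
        return {s: 0.0 for s in quantities}
    return {s: q / total for s, q in quantities.items()}
```

With a typical slope of about -3.32 (one log10 per ~3.32 cycles), a strain detected ten cycles earlier than another is roughly a thousand-fold more abundant.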
To verify that our method of broth recovery and qPCR analyses accurately rendered WITS abundances, we compared relative abundances of an equal mixture of culture-grown W1-W8 as determined by our qPCR strategy versus plating CFU of individual dilutions of each strain (Fig. S1B). Plating onto selective LB agar containing streptomycin (20 μg/ml) and kanamycin (40 μg/ml) was used to determine strain abundances in co-infections, sequential challenges, and transmission experiments. In addition to patch plating a minimum of 100 CFU per sample, undiluted samples were plated on selective plates to increase detection limits. For super shedder mice, this permitted detection of a strain comprising just 0.00000001% of the total S. Typhimurium population.

Determining equal fitness of WITS in vivo

The strain relative abundances were determined for each tissue in all of the 19 mice infected with the 10^8 equal mixture of 8 WITS. The relative abundances of each WITS strain were analyzed by one-way ANOVA (parametric) and Kruskal-Wallis (non-parametric) tests in Prism statistical software. These analyses were performed for each tissue collected from infected mice. Nonsignificant P values indicated that a particular WITS was not under- or over-represented in any tissue type (Table S2). To further verify that certain WITS strains were not preferentially selected for, a control experiment was performed in which mice were orally infected with an inoculum comprised of a skewed WITS mixture (Figure S2A). Underrepresented strains: W2, W3, W5, W6 (4.17%-7.32% of inoculum); overrepresented strains: W1, W4, W7, W8 (17.39%-20.94% of inoculum). Relative abundances of WITS in infected tissues were determined by qPCR after 35 days of infection. For each of the 8 WITS, defined bins were constructed for a range of strain relative abundances, with which the observed frequencies were used to generate histograms (Figure S2B).
Bray-Curtis dissimilarity analyses of WITS relative abundances in mouse tissues

Bray-Curtis dissimilarity scores were computed to quantitatively compare Salmonella population compositions in different sites. The relative abundance (y) of each WITS (n) was compared between two tissue sites i and j. The Bray-Curtis dissimilarity (d_BCD) was calculated by:

d_BCD = Σ_n |y_in − y_jn| / Σ_n (y_in + y_jn)

A value of 0 indicates an identical WITS composition between two sites, while a value of 1 signifies that two samples are completely dissimilar without any overlap in WITS representation.

Sequential infections with established and competing strains

For sequential infections in which the IP strain served as the initial strain, mice were first injected with 10^3 SL1344 and the infection was allowed to establish for 35 days. Following that time period, mice were challenged with an oral dose of 10^8 SL1344-kanR. In experiments with sequential oral infections, mice first received 10^8 SL1344 orally by drinking. A persistent infection was allowed to establish for 102 days before oral challenge with 10^8 SL1344-kanR. This sequential oral infection was also performed with the reciprocal order of strains, in which SL1344-kanR was given as the initial strain and SL1344 given as the challenge strain.

Cohousing experiments

Mice were infected orally with either 10^8 SL1344 or SL1344-kanR and fecal shedding of Salmonella was monitored over 14 days prior to the start of the experiment. A SL1344-kanR super shedder donor was then cohoused with mice previously infected with SL1344, in addition to a naïve uninfected mouse as a control. Cohousing was continued for 10 days before the super shedder donor was removed. The reciprocal cohousing experiments were performed in which a SL1344 super shedder donor was cohoused with mice previously infected with SL1344-kanR.

Kanamycin treatment

The aminoglycoside was administered orogastrically in a single dose of 20 mg (Sigma Aldrich, K4000) dissolved in 200 μl of water.
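The Bray-Curtis comparison described above is a one-line computation on two WITS relative-abundance vectors; a minimal sketch (hypothetical function name):

```python
def bray_curtis(y_i, y_j):
    """Bray-Curtis dissimilarity between WITS compositions at two tissue sites.

    y_i, y_j -- equal-length sequences of relative abundances (one entry per WITS).
    Returns 0 for identical compositions and 1 for completely
    non-overlapping ones, matching the interpretation in the text.
    """
    numerator = sum(abs(a - b) for a, b in zip(y_i, y_j))
    denominator = sum(a + b for a, b in zip(y_i, y_j))
    return numerator / denominator if denominator else 0.0
```

For instance, comparing a spleen composition (0.7, 0.3) against a cecum composition (0.3, 0.7) gives d_BCD = 0.4, while two identical compositions give 0.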
Mice were transferred to new cages with autoclaved bedding, chow (Harlan, Teklad 2018S), and water at the time of administration.

Statistical analyses

Prism (GraphPad) was used to create all figures and perform all statistical analyses. Intergroup comparisons of Bray-Curtis dissimilarity values (e.g. spleen-cecum versus colon-cecum) were analyzed by paired t-tests. Comparisons of oral and IP strain abundances within the same group of mice were evaluated with Wilcoxon matched-pairs signed rank tests. Differences in CFUs and strain composition between groups were examined by unpaired nonparametric Mann-Whitney tests. Significance was defined by p≤0.05.

Figure S1 WITS enumeration and qPCR analysis strategy to determine relative abundances during infection. Experimental design of WITS mouse infections and verification of strain quantification strategy. A) Growth curves of each strain in LB broth with appropriate antibiotics; no significant differences were observed, validating our broth recovery approach. B) Percent strain composition of an equal mixture of culture-grown W1-W8 as determined by qPCR or plating of individual strain dilutions. No significant differences were observed. Results are representative of 3 independent experiments. C) WITS experimental design. Fecal samples were collected throughout the experiment, after which animals were sacrificed and tissues collected. Samples were plated on selective LB agar to enumerate total S. Typhimurium CFU and inoculated into selective LB broth in preparation for genomic DNA extraction. Quantitative PCR (qPCR) was performed to determine WITS abundances. D) Enumeration of WITS CFU in various host tissues by plating on LB agar containing kanamycin. Each circle represents an individual mouse (n = 19).

Figure S8 In the presence of an established intestinal strain, challenging Salmonella are cleared from systemic and intestinal tissues. Mice from sequential infections performed in Figure 6.
SL1344 was used as the initial strain and SL1344-kanR was used as the challenging strain. Animals were sacrificed at the indicated time points and strain CFU were enumerated in tissues. Challenging strains (gray) were not detected. Limit of detection was determined by both differential plating and patch plating 100 CFU onto antibiotics; indicated by gray dashed lines. A) Mice were infected IP with 10^3 SL1344 (Strain 1, black) for 35 days, followed by oral challenge with 10^8 SL1344-kanR (Strain 2, gray). Animals were sacrificed 7 days post-challenge (42 dpi), n = 10. B) Mice were orally infected with 10^8 SL1344 (Strain 1, black) and challenged orally with 10^8 SL1344-kanR (Strain 2, gray) after 102 days. Animals were sacrificed 35 days post-challenge (137 dpi), n = 10. C) Recipient mice were orally infected with 10^8 SL1344 for 14 days before co-housing with a super shedder SL1344-kanR donor. Animals were sacrificed 10 days post-cohousing. Established (SL1344, black) and donor (SL1344-kanR, gray) strain CFU were enumerated, n = 8. (TIF)

Figure S9 Established strains are resistant to supercolonization, and clearance of the challenging strain occurs more rapidly in super shedders. A) Reciprocal order of strains from those used in sequential oral infections in Figure 6B. Mice were first inoculated with 10^8 SL1344-kanR (gray) by drinking, which established a persistent infection for 60 days. Animals were subsequently challenged with 10^8 SL1344 (black) by drinking. Left: Fecal CFU (mean, SD) of established strain SL1344-kanR and challenge strain SL1344 in feces. Right: Geometric mean of strain CFU in feces. Data are representative of two separate experiments (n = 6). B) Analysis of mice orally infected with sequential initial and challenge strains (described above and in Figure 6B) based on prior shedding status.
Percent abundance of the initial strain shed in feces in low (LS, n = 12), moderate (MS, n = 26), and super (SS, n = 8) shedder mice was determined for the specified days post-challenge. *p<0.05, ***p<0.001, ****p<0.0001, unpaired Mann-Whitney tests. (TIF)

Figure S10 Salmonella SL1344-kanR can be rapidly transmitted to naïve mice, and establishes a persistent intestinal infection exerting intraspecies colonization resistance against SL1344 from an infected donor. A) Super shedder donors rapidly transmit SL1344-kanR to naïve uninfected mice. 14 days prior to cohousing, potential donor mice were infected orally with 10^8 SL1344-kanR. Fecal Salmonella CFU/gram were tracked and a super shedder donor (gray asterisk) was identified, then cohoused with a recipient naïve mouse for 24 hours. Fecal shedding levels of SL1344-kanR from the recipient mice (open circles) were then tracked over 21 days. Data are representative of two independent experiments (n = 2 donors, 2 naïve recipients). B) Reciprocal order of strains from those used in cohousing experiments in Figure 6C. Mice were first infected orally with 10^8 of either SL1344 or SL1344-kanR 14 days prior to cohousing. A SL1344 super shedder donor (black asterisk) was cohoused with SL1344-kanR infected recipient mice and removed after 18 days. Geometric means of Salmonella CFU/gram feces in recipient mice shedding the established SL1344-kanR strain (gray) or challenging donor SL1344 strain (black). Data are representative of two independent experiments (n = 2 donors, 6 recipients). (TIF)

Figure S11 Disruption of microbiota-mediated colonization resistance with streptomycin increases fecal shedding of intestinal Salmonella, but does not permit reseeding by the systemic strain. Mice were co-infected with 10^8 SL1344 orally and 10^3 SL1344-kanR IP. A single dose of 5 mg streptomycin in 100 μl water was delivered by oral gavage after 30 days of co-infection (n = 5).
Geometric means of oral (black) and IP (gray) strain CFU shed per gram feces. Limit of detection is 10 CFU/gram feces. (TIF)

Figure S12 A Salmonella hydrogenase mutant is cleared from feces and tissues after 15 days of infection. Single oral infections were carried out in mice with 10^8 WT SL1344-kanR (black), ΔSPI1ΔSPI2 (blue), or ΔhybΔSPI1ΔSPI2 (red). Data are representative of two independent experiments (n = 6/group). A) Salmonella CFU in mouse tissues after single oral infections with either WT or ΔhybΔSPI1ΔSPI2 (red) after 15 days of infection. Each circle represents an individual mouse, lines at means. *p = 0.0152, **p = 0.002, unpaired Mann-Whitney tests. B) Fecal shedding of Salmonella was monitored at the specified time points post-infection (mean, SD). No significant differences in Salmonella CFU/gram feces were observed between WT and ΔSPI1ΔSPI2 oral infections. *p<0.0411, **p<0.0022, unpaired Mann-Whitney tests. (TIF)

Figure S13 Total Salmonella in feces of mice co-infected with mutant strains. A) Mice in the Δhyb group received 10^8 ΔhybΔSPI1ΔSPI2 orally and 10^3 WT SL1344 by IP. Control mice received 10^8 WT SL1344-kanR orally and 10^3 WT SL1344 by IP. Data are representative of two independent experiments (control n = 10, Δhyb n = 7). Total Salmonella CFU per gram feces, comprising both oral and IP strains, detected over 15 days of co-infection for both control and Δhyb mouse groups (mean, SD). Comparison of total fecal Salmonella CFU between day 7 and day 15 in the Δhyb co-infected group is displayed in red (*p = 0.0373). *p day4 = 0.00, *p day7 = 0.0247, ***p = 0.0004, unpaired Mann-Whitney tests. B) Mice in the ΔSPI group received 10^8 ΔSPI1ΔSPI2 orally and 10^3 WT SL1344 by IP. Control mice received 10^8 WT SL1344-kanR orally and 10^3 WT SL1344 by IP. Data are representative of two independent experiments (n = 10/group).
Total Salmonella CFU per gram feces, comprising both oral and IP strains, detected at the specified time points throughout 70 days of co-infection for both control and ΔSPI mouse groups (mean, SD). ns = not significant, unpaired Mann-Whitney tests. (TIF)

Table S2 Statistical analyses of WITS abundances in all sites sampled. qPCR analyses were performed on all specified tissues collected from all mice (n = 19) to determine WITS relative abundances. The relative abundance of each WITS strain in each tissue was analyzed by one-way ANOVA (MS = mean square, F = F statistic, DFn = degrees of freedom numerator, DFd = degrees of freedom denominator) and Kruskal-Wallis tests. Significance defined by p≤0.05. (XLSX)

Table S3 Total burden of Salmonella within intestinal tissues in the co-infection model. Mice were co-infected with SL1344 and SL1344-kanR, one strain orally and the other by IP injection. Mice were sacrificed after 30 days of infection, and CFU per gram tissue of the oral strain was converted into total CFU within the entire small intestine, cecum, or colon. (XLSX)

Mice were either fed a control or lithogenic gallstone-inducing diet for 11 weeks. Animals were then co-infected with 10^8 SL1344 and
Partition Functions of Chern-Simons Theory on Handlebodies by Radial Quantization

We use radial quantization to compute Chern-Simons partition functions on handlebodies of arbitrary genus. The partition function is given by a particular transition amplitude between two states which are defined on the Riemann surfaces that define the (singular) foliation of the handlebody. The final state is a coherent state, while on the initial state the holonomy operator has zero eigenvalue. The latter choice encodes the constraint that the gauge fields must be regular everywhere inside the handlebody. By requiring that the only singularities of the gauge field inside the handlebody must be compatible with Wilson loop insertions, we find that the Wilson loop shifts the holonomy of the initial state. Together with an appropriate choice of normalization, this procedure selects a unique state in the Hilbert space obtained from a Kähler quantization of the theory on the constant-radius Riemann surfaces. Radial quantization allows us to find the partition functions of Abelian Chern-Simons theories for handlebodies of arbitrary genus. For non-Abelian compact gauge groups, we show that our method reproduces the known partition function at genus one.

Introduction

Chern-Simons gauge theory connects many different topics in mathematics and physics. On closed manifolds it is a topological theory that can be used to compute knot invariants [1], while on manifolds with boundaries it acquires additional boundary degrees of freedom that connect it to gravity in three dimensions [2,3,4,5] and to the theory of the fractional quantum Hall effect [6,7]. As remarked in [8], one intriguing feature distinguishes Chern-Simons theory from conventional topological field theories, such as topological Yang-Mills theories on Riemann surfaces or four-manifolds: the latter can be interpreted in terms of the cohomology ring of some classical moduli space of connections, while Chern-Simons, in general, cannot.
In fact Chern-Simons theory is intrinsically a quantum theory that is best described by a Hilbert space. When the three-manifold on which the theory is defined has special characteristics, the theory simplifies and may become computable. One remarkable example is the case of Seifert manifolds studied in [8]. Another case which could lead to exact computations is that of handlebodies [9]. The latter is interesting for various reasons. One of the most fascinating is that in order to test any conjectured holographic dualities relating pure gravity in three dimensions to a conformal field theory [10] (or an ensemble average thereof [11,12,13]), one would need to know the partition function of SL(2, C) Chern-Simons theory on a negatively curved manifold whose boundary is a Riemann surface. Handlebodies are the simplest such manifolds for a fixed genus of the boundary [14]. The reason why one may think that a Chern-Simons theory may be exactly soluble on handlebodies is that these spaces are almost factorized as the topological product [0, R] × Σ. We say "almost" because the closed Riemann surface Σ defining the foliation of the space becomes singular at one of the extrema of the interval [0, R]. The simplest example of this foliation is the solid torus handlebody, that is, the direct product of a disk D^2 and a circle S^1. Its singular foliation is D^2 × S^1 ≈ [0, R] × T^2. The ≈ sign means that the two-torus leaf T^2 = S^1 × S^1 becomes singular at the end r = 0 of the interval [0, R], where one of the two S^1 cycles degenerates. By interpreting r ∈ [0, R] as time, we can quantize the theory and define a Hamiltonian that evolves in r. This allows us to rewrite the partition function of the theory as a transition amplitude between some initial state |i⟩ at r = 0 and some final state |f⟩ at r = R.
We will show in this paper that the condition that the initial state is a "shrunken," degenerate surface imposes a restriction on the initial state that, combined with the constraints descending from gauge invariance and the independence of the scalar product from the complex structure, completely fixes the partition function. Let us now describe more precisely the procedure that we shall follow and the organization of this paper. We study partition functions of Chern-Simons theory of compact gauge groups on handlebodies using radial quantization. First, we establish the equivalence between three quantities: Euclidean path integrals with holomorphic boundary condition, transition amplitudes under radial evolution with a coherent state as the final state, and wave functions integrated over the gauge orbit. Second, we map a Wilson loop inserted in a path integral to a "blown-up" operator defined on the Riemann surface, which in the radial quantization acts on a seed wave function and defines an initial state of definite holonomy along the contractible cycles. Together with an appropriate choice of normalization, this procedure singles out a unique vector in the Hilbert space obtained by a canonical quantization of Chern-Simons theory on the Riemann surface. Moreover, we find that requiring that such a "blown-up" operator be gauge-invariant corresponds to selecting a particular class of framings of the original Wilson loop. We are thus able to establish a precise state-operator correspondence associating each vector in the Hilbert space of the canonically quantized Chern-Simons theory on Σ to an explicitly computed partition function with insertions of Wilson loops. We first consider the Abelian U(1) gauge group on the solid torus, then on handlebodies of arbitrary genus, and finally we study general compact simple groups on the solid torus.
In Section 2, we study the U(1) Chern-Simons theory, first on a torus handlebody and then on handlebodies defined by higher-genus Riemann surfaces. In Section 3, we move on to consider the case of a general non-Abelian simple compact Lie group on the torus handlebody. Appendices A and B respectively summarize essential facts about the Riemann theta function and quadratic differentials on a Riemann surface.

The Abelian Case

To study Chern-Simons theory with gauge group U(1) on the genus-g handlebody M, we define a singular foliation on M as M = Σ × [0, R]. The constant-radius leaves are closed Riemann surfaces, Σ, and the initial surface Σ_0 at r = 0 is degenerate. The final surface Σ_R is at r = R. On Σ we specify the complex structure by giving the period matrix Ω, for which Σ has area det Im Ω, and which defines the basis {ω_I | I = 1, . . . , g} of Abelian differentials and the local complex coordinate z on Σ. Since we will be considering either Abelian gauge fields at genus g or non-Abelian gauge fields at genus one, the period matrix will suffice to define the complex structure; we will not need to give explicit definitions of either Teichmüller or moduli space coordinates. We also use the notation ω_I = ω_I(z)dz, and when it can be done unambiguously we keep the index I implicit. The integration measure on Σ is normalized to d^2x = dz ∧ dz̄/(−2i), so that ∫_Σ d^2x ω_I(z) ω̄_J(z̄) = (Im Ω)_IJ. One of our goals is to establish the equivalence of three different quantities. The first is a path integral, in which on the final surface Σ_R we impose a holomorphic boundary condition that fixes the antiholomorphic part A_z̄ dz̄ of the gauge connection A, while on the initial surface Σ_0 we fix the component of A along the contractible cycles. The second is a transition amplitude under radial evolution, from an initial state of definite holonomy along the contractible cycles to a coherent final state.
The third is a wave function in a coherent state basis, obtained by integrating over the gauge orbit a seed wave function which is an eigenstate of the holonomy operator along the contractible cycles. These quantities will be compared to the Chern-Simons partition functions that are identified with the wave functions obtained by a holomorphic quantization on the Riemann surface Σ [15][16][17]. The basis wave functions spanning the gauge-invariant Hilbert space were explicitly given in [16] as (2.2). The complex number u defines the harmonic part of the differential A_z̄ dz̄, while the integer-valued vector μ labels the independent vectors spanning the basis of the Hilbert space. Moreover, k is the Chern-Simons level, χ is a periodic function on Σ, and θ[a;b](u, Ω) is the Riemann theta function with characteristics [18], as defined in (A.1). F(Ω)^{1/2} is the "holomorphic square root" of the scalar Laplace determinant on Σ [19]. The obstruction to holomorphic factorization [20], S_{ZTL}, is the nonholomorphic part of the Liouville action defined by Zograf and Takhtajan (see [19,21]). For genus one on the flat metric, F(Ω)

The torus case

As a warm-up, we first consider the case where M is the solid three-dimensional torus. On each constant-radius surface Σ = T^2 (which is a two-torus) the period matrix is the modular parameter τ ≡ τ_1 + iτ_2 and defines the global holomorphic coordinate z on the torus. From this, we can define local real coordinates x^{1,2} by z ≡ x^1 + ix^2, where x^1 ∼ x^1 + 1 parametrizes the contractible cycle on M. The restriction to T^2 of a one-form field A is

A|_{T^2} = A_1 dx^1 + A_2 dx^2 . (2.4)

Although these real coordinates are also valid locally on higher-genus Riemann surfaces, for those cases we will use a better description, given in terms of Strebel differentials [22].
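For concreteness, the genus-one Riemann theta function with characteristics that enters the basis wave functions can be evaluated numerically by truncating its defining sum. This sketch assumes the standard convention θ[a;b](u, τ) = Σ_n exp(iπτ(n+a)^2 + 2πi(n+a)(u+b)); the paper's definition (A.1) is not reproduced in this excerpt and may differ by conventions:

```python
import cmath


def theta_char(a, b, u, tau, cutoff=50):
    """Genus-one Riemann theta function with characteristics a, b:

        theta[a;b](u, tau) = sum_n exp(i*pi*tau*(n+a)^2 + 2*pi*i*(n+a)*(u+b)),

    evaluated by truncating the sum at |n| <= cutoff.  The series converges
    rapidly for Im tau > 0, so a modest cutoff suffices.
    """
    total = 0j
    for n in range(-cutoff, cutoff + 1):
        m = n + a
        total += cmath.exp(1j * cmath.pi * tau * m * m
                           + 2j * cmath.pi * m * (u + b))
    return total
```

As a sanity check, for a = b = 0 the function is periodic under u → u + 1, and theta_char(0, 0, 0, 1j) reproduces the classical value θ_3(0 | τ = i) ≈ 1.0864348.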
In the next subsections, we establish the equivalence between the three quantities mentioned earlier: the partition function given as a path integral, the transition amplitude, and the gauge-invariant wave function obtained from an appropriate "seed" wave function.

The path integral

We impose a holomorphic final condition, fixing A_z̄ dz̄|_{Σ_R} = ∂_z̄χ dz̄ + iπu τ_2^{−1} ω̄, as in (2.2); on the torus, ω = dz. In addition, as initial condition we fix the component of A along the contractible cycle. In (2.6), the boundary term f_B[A_1] appears; a comment on the partition function Z(A_z̄|_{Σ_R}, A_1^{(0)}; τ) is in order. We are considering the gauge group U(1), not R. The distinction is that U(1) includes large gauge transformations defined on the boundary Σ_R of the handlebody M. A large gauge transformation that has a non-trivial winding along a homotopy cycle of Σ_R that is contractible in M cannot be extended smoothly to M. This implies that the partition function is a sum of terms that are not related by bulk gauge transformations. Integrating out A_r, the path integral imposes F_12 = 0 [17], so we get (2.7). The standard procedure is to express (A_1 dx^1 + A_2 dx^2) as a flat connection, resulting in a chiral Wess-Zumino-Witten path integral on the final surface [17].

The transition amplitude

We turn now to the coherent state method. The first term in (2.7) (the bulk term) defines the symplectic structure of the theory, implying that A_1 and A_2 are conjugate variables and satisfy upon quantization the equal-radius canonical commutation relation

[A_1(x), A_2(y)] = (2πi/k) δ^(2)(x, y) .

Here δ^(2)(x, y) denotes the delta function with respect to the (x^1, x^2)-coordinates. Moreover, we define the A_1-eigenstate |A_1⟩ as a translation from the A_1 = 0 eigenstate |0⟩, effected by applying the conjugate momentum. Here too C is a normalization constant, which we leave arbitrary for the time being.
Using (2.9) together with (2.4), we can construct the wave function of the coherent state |A z ) in the |A 1basis, which satisfies the defining properties (with Az = A * z ), Let us consider the transition amplitude, from an A 1 -eigenstate |A (0) 1 on the initial surface Σ 0 , to a coherent state |A R z ) on the final surface Σ R , as we radially evolve the system with the Hamiltonian read off from (2.7): This is identical to the partition function (2.7), Z(Az| Σ R , A 1 ; τ ), with the boundary term f B [A 1 ] = 0, and A R z = Az| Σ R . In both cases, we have imposed the initial condition 1 . The equivalence between (2.7) and (2.14) holds for arbitrary genus because it only relies on a local decomposition of the complex coordinate z into real coordinates that is independent of the topology of the surface Σ. From now on, without ambiguity, we drop the superscript R from A R z . Next, we evaluate Eq. (2.14) and find out what it computes for the torus case. We parametrize the A 1,2 that solve the constraint F 12 = 0 by where λ 0 (r, x 1 , x 2 ) is a periodic function on Σ, and the shift in A 1 by 2πn with n ∈ Z comes from the large gauge transformations that are singular inside the bulk. Note that a shift in λ 0 (r, x 1 , x 2 ) by any x 1 -independent function f 2 (r, x 2 ) also solves F 12 = 0 and leaves the integrand of the path integral invariant, thus the x 1 -independent modes can be factored out of the path integral and consistently discarded 4 . On the other hand, shifting λ 0 (r, x 1 , x 2 ) by some f 1 (r, x 1 ) changes the boundary action, so these modes cannot be factored out from the path integral. We restrict our initial condition to A 1 | Σ 0 = a 1 (0) with a 1 (0) = constant-this is a natural choice since the initial surface is in fact degenerate, so λ 0 (r = 0, is independent of x 1 . The integration measure in (2.14) satisfies [17] i.e. the change of variables (2.15) has unit Jacobian. Here a prime denotes discarding x 1independent functions. 
Moreover, as in (2.2), Az = ∂zχ + iπuτ −1 2 . The amplitude (2.14) becomes In arriving at (2.17), we integrated out a 2 (r) to obtain an r-independent a 1 (r) = a 1 . Together with the initial condition A 1 (r = 0) = a 1 (0), this means a 1 (R) = a 1 (0). We also defined The path integral on λ 0 equals det −1/2 (− k 2π ∂z∂ 1 ). We are still free to choose the constant C. Besides removing ultraviolet divergences in the functional determinant, it can be further fixed by requiring that eq. (2.18) be a section of a projectively flat connection on the moduli space of complex structures [15]. This is simply the requirement that the scalar product of the base wave functions (2.18) must be independent of the complex structure. By making this choice we get 1/ F (Ω) genus g = 1 torus Σ = T 2 , they are given by (2.19) where Az = ∂zχ + iπuτ −1 2 , and µ = 0, 1, . . . , k − 1. We cannot reabsorb this difference into a redefinition of the constant C without giving up one of the objectives of our paper, which is to establish a state-operator correspondence associating each state obtained by applying Wilson loops to the vacuum to the partition function of Chern-Simons on a solid torus containing the same Wilson loop. So, once we normalize the vacuum and the vacuum partition function, we cannot further normalize separately the other partition functions. What we can do is to understand where the discrepancy comes from and try to fix it by appropriately changing the definition of the Wilson loop operator. To find the meaning of this discrepancy, we consider a different basis on the torus. We define global coordinates (φ, t) which both have unit period, so that z = φ + τ t, φ ∼ φ + 1, . (2.20) In particular, are related to the previous conjugate variables (A 1 , A 2 ) by a canonical transformation which simply shifts A 2 by a term linear in A 1 . The canonical commutation relation is Here δ (2) (x, y) is again the delta function in the (x 1 , x 2 ) coordinates. 
Similarly to (2.9), we define the A φ -eigenstate |A φ by translating the A φ = 0 eigenstate |0 ≡ |0 , but this time with the operator t , The eigenstates |A φ and |A φ are related by a pure phase, By using (2.20), we see that the wave function of the coherent state |A z ) in the |A φ -basis differs from (2.10) Repeating the same calculations as above, one finds that where we normalized C as in Eq. (2.18). This is exactly one of the (2.19) when we set a 1 (0)/2π = µ/k. Thus, we learn that to get an answer holomorphic in the complex structure τ we need a particular choice of canonical variables (A φ , A t ), or equivalently a particular choice of eigenstate |A φ . In terms of the path integral Z(Az| Σ R , a φ (0); τ ), this corresponds to a particular choice of the boundary term, namely: The gauge-invariant wave function Under a gauge transformation Here λ can include large gauge transformations. Thus, starting from any "seed" wave function Ψ 0 [Az] we can integrate over the gauge group to construct a gauge-invariant wave function: This formula includes a sum over large gauge transformations, so the most general λ is where λ 0 is periodic on the torus, while the multivalued large gauge parameter λ enters in the integral only through its derivatives, which are single-valued on the torus; they are given by If we take (Az|a 1 (0) to be a seed wave function Ψ 0 [Az] which is not necessarily gauge-invariant and integrate over all gauge transformations including the large transformations m, n ∈ Z, we reproduce the theta function in (2.19). To see this, we impose again the conditions (2.2) which is exactly Z(Az, µ; τ ) in (2.19). In arriving at (2.35), the constant C was fixed as in (2.18) using the normalization condition C det −1/2 (− k 2π ∂z∂ 1 ) = η(τ ) −1 and we used that k/2 ∈ Z >0 , m ∈ Z and ka 1 (0)/2π = µ ∈ Z, so the summand does not depend on m, Because of this, in the last line we discarded the infinite sum over m ∈ Z. 
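The Dedekind eta function that fixes the normalization constant C in this condition is

```latex
\eta(\tau) \;=\; q^{1/24}\prod_{n=1}^{\infty}\big(1-q^{n}\big),
\qquad q = e^{2\pi i \tau}.
```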
Similarly, the same calculation done with (Az|a 1 (0) as seed wave function reproduces (2.17). Notice that discarding the sum over m means simply to remove identical gauge copies from the definition of the gauge-invariant wave function. This is a standard part of the construction of a gauge invariant, normalizable state or operator using an integral (and/or sum) over gauge transformations. Its analog in the context of three-dimensional gravity is explained for instance in [14]. Blowing up Wilson loops One can insert into the path integral a gauge-invariant Wilson loop operator defined along a loop C on M , asŴ (2.37) P means path-ordering, and the U (1) charge µ is integer-valued such thatŴ µ [C] is invariant under large gauge transformations defined on C. We restrict C to be a path that runs along the non-contractible cycle of M , and without loss of generality put it at the origin r = 0 of the solid torus. We would like to mapŴ µ [C] to a "blown-up" operator in radial quantization, which acts on a state defined on Σ. To this end, recall that the A 1 -eigenstate |A 1 is the translation of the A 1 = 0 eigenstate |0 by the operator 2 given in (2.9), The initial surface Σ 0 is degenerate but the Wilson loop operatorŴ µ [C 2 ], with C 2 at r = 0 running along the x 2 -direction, can be "blown-up" and identified with the translation operator defined in (2.38) acting on the Hilbert space on Σ, Alternatively, choosing A t as the conjugate momentum from (2.22) we have We can also define a "blown-up" version of the Wilson loop operatorŴ µ [C t ], with C t at r = 0 running along the t-direction, and identify it with the translation operator in (2.41), (2.43) Gauge invariance and framing Both C 2 and C t trace the same closed loop at the origin, though with twists differing by τ 1 . One may wish to assign a framing to this loop by defining a vector field on it [1,24], thereby extending this loop into a ribbon. 
Such a vector field must be periodic under the global identification (x 1 , x 2 ) ∼ (x 1 + τ 1 , x 2 + τ 2 )-now that we are away from the degenerate r = 0 surface. The simplest choice is that corresponding to C t , while that corresponding to C 2 does not respect the periodicity. In the language of the "blowing-up" procedure, this fact translates to demanding that Note that theŴ µ [Σ, t], µ ∈ Z, are not the only gauge-invariant operators. The most general gauge invariant "blown-up" operator on Σ with constant coefficients 6 takes the form 6 We will drop this restriction in the higher-genus cases. Higher genus For partition functions on higher-genus handlebodies, it is convenient to make use of certain special quadratic differentials on Riemann surfaces, reviewed in Appendix B. Specifically, we pick a Strebel differential ϕ, which is a quadratic differential on the Riemann surface Σ, holomorphic in the complex structure; locally, ϕ = h(z)dz 2 where h(z) is holomorphic. The existence of such differentials is proven in [22]. We do not need to know their precise form. All we need from a Strebel differential is the fact that it foliates the Riemann surface Σ into horizontal trajectories, which are closed curves given by The Strebel differential ϕ also defines a metric on Σ, which takes the form This metric may have zeros or singularities, which define the singular points of the foliation. We define next a vector field v of unit norm with respect to (2.52), whose integral curves are Figure 1: A schematic illustration of a Strebel differential on a genus-two Riemann surface, which defines horizontal trajectories denoted by blue loops. The vector field v that generates the horizontal trajectories is denoted by red arrows. the horizontal trajectories so that We use the vector field v to define cycles on a higher-genus Riemann surface Σ, that are contractible on the corresponding handlebody M . On the torus, v =v = 1 and ∂ h = ∂ 1 . 
The square root in (2.53) can cause generically an obstruction to defining a global holomorphic vector field on Σ. On the other hand, we do not need v to be holomorphic, so we can always rescale v by a common factor: v → v with a smooth real function. The equations that we will find in the next subsections depend only on the ratiov/v, which is not affected by the rescaling. The function can even vanish on subsets of measure zero that are transverse to the horizontal trajectories without altering the ratiov/v. By making vanish somewhere on Σ, a nonholomorphic vector field can be defined everywhere on Σ. The horizontal trajectories and the vector field v that generates them are illustrated schematically in Figure 1. The vacuum partition function We would like to generalize the third approach used in the torus case: start from a non-gauge As for genus one, here the constant C may depend on the complex structure and is fixed by properly normalizing the vacuum partition function. We show next that integrating it over the gauge orbit does result in a particular vector in the Hilbert space (2.1) obtained from Kähler quantization, holomorphic in the complex structure Ω. In particular, we will see that, after integration, the vector v appears only in the Weyl anomaly. Given any seed wave function Ψ 0 , the gauge integral is given by a generalization of (2.29) on the torus. It reads We begin by decomposing where both χ, λ 0 are single-valued on Σ. The multivalued function λ appears everywhere only through its derivatives, which are also single-valued on Σ. Next we evaluate the integrand: (2.60) (On the torus, this reduces to (2.33) with a 1 (0) = 0). Let us look now at the terms involving Thus, we see that the vector field v indeed drops out, except in the fluctuation term. 
Substituting the definitions (2.58) into Ψ[Az] and repeating the same calculation done in the torus case, we arrive at Note that the constant C also reabsorbs a term that contains a Weyl anomaly and therefore, because of (2.53), a dependence on v. We also used k/2 ∈ Z and m, n ∈ Z g , and discarded the trivial sum over m, that is the sum over large gauge transformations that can be extended to the bulk and under which the wave function is invariant. Wilson loops On a higher-genus handlebody, besides Wilson loops that can be regarded as "world histories of mesons," there is also another class of gauge-invariant observables, which correspond to the "world histories of baryons" running along the non-contractible cycles; see [24]. For the Abelian case that we have considered here, however, the fusion rule is trivial, so those "baryon world histories" can be decomposed into disjoint Wilson loops running along the non-contractible cycles of the handlebody. Therefore, it suffices to consider only standard Wilson loops. We would like to generalize the "blowing-up" of Wilson loops that we studied on the torus in Section 2.1.1 to higher genus. Consider the loops C I running along the g non-contractible cycles of M and endowed with charges µ I ∈ Z k . The resulting Wilson loops are then "blown up" into operatorsŴ [Σ, w] on Σ, parametrized by real one-forms (2.70a) Here η is a real single-valued function on Σ, and λ is a large gauge transformation (2.58). As discussed in Section 2. Using again the Baker-Campbell-Hausdorff formula (2.46), one gets We repeat the calculation done in the last subsection to evaluate the gauge integral (2.56). The integrand is The terms containing λ 0 arẽ The saddle point λ 0,cl at whichS is extremal satisfies the equation of motion So, once again, v drops out of the action-except in the fluctuation term which also gives the Weyl anomaly. Moreover, the function η in w drops out as well. 
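The Baker-Campbell-Hausdorff formula used above (the paper's (2.46)) truncates at the first commutator because the commutator of the relevant operators is a c-number; for X, Y with central [X, Y]:

```latex
e^{X}\,e^{Y} \;=\; e^{X+Y+\frac{1}{2}[X,Y]}
\;=\; e^{Y}\,e^{X}\,e^{[X,Y]}.
```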
Evaluating the path integral in the same way as before, we finally get Here used the same normalization for C as in eq. (2.67) together with k/2 ∈ Z, m, n ∈ Z g and µ ∈ Z, and discarded the trivial sum over m. Since η drops out eventually, we can repeat the analysis we performed in the torus case. Namely, In the special case N I = N IJ µ J for some symmetric matrix N with integer entries, this phase exp(+(iπ/k)µN µ) is naturally interpreted as the framing anomaly. The non-Abelian Case We consider now the non-Abelian case, with a compact, simply-connected and simple Lie group G on a solid torus. In this section M is always the torus handlebody and Σ = T 2 . By generalizing the equation (2.24) found in the Abelian case, we will consider the A φ -eigenstate |A φ translated by the conjugate momentum A t . In the coherent state basis, it reads Solving the constraint F 12 = 0 by [17] A On the torus a φ (r) and a t (r) commute so they are elements of the Cartan subalgebra h of g; this is not true in general for higher genus. Moreover, by integrating out a t (r) we get a φ (r) = a φ (0), so the amplitude (3.3) becomes With an appropriate, Az-independent choice of C, this is the chiral Wess-Zumino-Witten path integral. For a φ (0) = 2πµ/k where µ is an integral weight of G and Az = iuτ −1 2 , ref. [26] shows that the path integral gives the Weyl-Kac character χ µ,k (u, τ ): where ρ and h ∨ are respectively the Weyl vector and the dual Coxeter number of g. The Weyl-odd theta function is defined as where W is the Weyl group of G and (w) is the signature of w ∈ W . θ µ,k (u, τ ) is the level-k theta function for the Lie algebra g, whose definition is recalled in (A.3). 
Wilson loops The Wilson loop operator of the representation generated by the integral highest weight µ of G, along a loop C of constant radius in M , iŝ (3.10) In the last equality, we stripped off the pure gauge part of (recall the definition A i = g −1 a i g + g −1 ∂ i g) due to the trace in the definition ofŴ µ [C], so we only need to look at the equal-radius canonical commutation relation ofâ i (r), which we read off from (3.5): Here we have expanded a φ,t (r) = rank(g) j=1 a j φ,t (r)H j in the Cartan-Weyl basis {H j } of the Cartan subalgebra h of g, where j, l = 1, . . . , rank(g). For the loop C t at r = 0 running along the t- (3.12) As in the Abelian case, we map (3.12) to a "blown up" gauge-invariant operatorŴ µ [Σ] defined on Σ, which is to be identified with the translation operator by the conjugate momentum a j t , acting on the a φ = 0 eigenstate |0 . Sinceâ t is constant on Σ, the Wilson loop is simply given The first equality is the character ofâ t as an element of h. This is expressed as a Weyl character in the second equality. We recall the latter's definition: (3.14) where µ are the weights in the weight system Ω µ of the highest weight µ, which span a highestweight representation of G. By the Weyl character formula, (3.14) can be written as a ratio of sums over the Weyl group W of G: Because the radial evolution is linear in the initial state, this identity holds if the corresponding identity is true for the Weyl character of the Lie algebra, µ ∈Ωµ This should be understood as an equality in terms of the Weyl character formula (3.15). Intuitively, this identity should hold due to the fact that all the weights (µ + ρ) with µ ∈ Ω µ , except for the highest weight µ, pair up under simple Weyl transformations. Partition function as a gauge-invariant wave function Here we proceed in the same way as in the Abelian case. A wave function transforms as Since we consider a simply-connected group G, the gauge group G is connected. 
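The Weyl character formula referred to above (the paper's (3.15)) expresses the character as a ratio of sums over the Weyl group; for a highest weight µ and a ∈ h:

```latex
\chi_{\mu}(a)
  \;=\;
  \frac{\displaystyle\sum_{w \in W} \epsilon(w)\,
        e^{\langle w(\mu+\rho),\, a\rangle}}
       {\displaystyle\sum_{w \in W} \epsilon(w)\,
        e^{\langle w\rho,\, a\rangle}} .
```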
Similarly, starting from a wave function Ψ 0 [Az] that is not gauge-invariant, we can construct a gaugeinvariant wave function by integrating over the gauge group: Taking Ψ 0 [Az] = (Az|a φ as the seed wave function in (3.1) and after a quick calculation, one recovers the chiral Wess-Zumino-Witten path integral (3.6). In other words, radially evolving the wave function is equivalent to integrating over the gauge group, which results in a gaugeinvariant wave function. B Quadratic Differentials We summarize essential facts about quadratic differentials on a Riemann surface from Strebel [22] and Hubbard & Masur [27]. Consider a compact Riemann surface Σ of genus g and n punctures, endowed with a complex structure which defines a local complex coordinate denoted by z. A (meromorphic) quadratic differential ϕ on Σ is a (2, 0)-meromorphic differential; it locally takes the form where h(z) is meromorphic, and under a holomorphic change of coordinate z →z(z), it transforms by the chain rule as z →z(z), h(z) →h(z) = dz dz 2 h(z), so that ϕ =h(z)dz 2 = h(z)dz 2 . (B.2) When h(z) is holomorphic, then ϕ is a holomorphic quadratic differential. On a closed genus-g > 1 Riemann surface without punctures, the complex dimension of the space of all holomorphic quadratic differentials is (3g − 3), as a result of the Riemann-Roch theorem. Quadratic differentials find applications in physics, especially in conformal field theory and string field theory (see e.g. [28,29,30,31]), because they provide a convenient foliation for a Riemann surface Σ. Given a meromorphic quadratic differential ϕ, a horizontal trajectory is a non-self-intersecting continuous loop on which ϕ is real and positive, while a vertical trajectory is a non-self-intersecting continuous loop on which ϕ is real and negative. 
Equivalently, on a local patch U of Σ with complex coordinate z, and a base point p 0 ∈ U , we can define a local natural complex coordinate w on p ∈ U by w(p) ≡ Then, on a horizontal (vertical) trajectory, w has constant imaginary (real) part. A critical point of a ϕ meromorphic on Σ is a zero or a pole of ϕ, while all other points on Σ are called regular points. A critical trajectory is a horizontal trajectory that joins critical points. In general, a zero of order n is the endpoint of some (n + 2) critical trajectories. A quadratic differential ϕ defines a metric on Σ, which is locally given by For g = 1, i.e. the torus, a Strebel differential ϕ obviously exists: ϕ = dz 2 .
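As an illustrative numerical sketch (not taken from the paper), the horizontal trajectories of ϕ = h(z) dz² can be traced by integrating dz/dt = 1/√h(z) with a continuously chosen branch of the square root, so that the natural coordinate w = ∫√h dz advances along the real axis:

```python
import cmath

def trace_horizontal(h, z0, steps=1000, dt=1e-3):
    """Trace a horizontal trajectory of the quadratic differential
    phi = h(z) dz^2 starting at z0.

    On a horizontal trajectory the natural coordinate w = integral of
    sqrt(h) dz has constant imaginary part, so we step with
    dz = dt / sqrt(h(z)), keeping the branch of the root continuous.
    """
    z = z0
    sqrt_prev = cmath.sqrt(h(z0))
    path = [z]
    for _ in range(steps):
        s = cmath.sqrt(h(z))
        if (s / sqrt_prev).real < 0:   # keep the square-root branch continuous
            s = -s
        sqrt_prev = s
        z = z + dt / s                  # Euler step: dw = sqrt(h) dz = dt (real)
        path.append(z)
    return path

# For the torus Strebel differential phi = dz^2 (h = 1), the horizontal
# trajectories are the lines of constant Im z:
path = trace_horizontal(lambda z: 1.0, 0.25j)
assert all(abs(z.imag - 0.25) < 1e-9 for z in path)
```

For h(z) = z the same routine traces a trajectory along the positive real axis away from the zero, matching the picture of critical trajectories emanating from zeros of ϕ.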
Unraveling the message: insights into comparative genomics of the naked mole-rat

Abstract: Animals have evolved to survive, and even thrive, in different environments. Genetic adaptations may have indirectly created phenotypes that also resulted in a longer lifespan. One example of this phenomenon is the preternaturally long-lived naked mole-rat. This strictly subterranean rodent tolerates hypoxia, hypercapnia, and soil-based toxins. Naked mole-rats also exhibit pronounced resistance to cancer and an attenuated decline of many physiological characteristics that often decline as mammals age. Elucidating mechanisms that give rise to their unique phenotypes will lead to better understanding of subterranean ecophysiology and biology of aging. Comparative genomics could be a useful tool in this regard. Since the publication of a naked mole-rat genome assembly in 2011, analyses of genomic and transcriptomic data have enabled a clearer understanding of mole-rat evolutionary history and suggested molecular pathways (e.g., NRF2-signaling activation and DNA damage repair mechanisms) that may explain the extraordinary longevity and unique health traits of this species. However, careful scrutiny and re-analysis suggest that some identified features result from incorrect or imprecise annotation and assembly of the naked mole-rat genome; in addition, some of these conclusions (e.g., genes involved in cancer resistance and hairlessness) are rejected when the analysis includes additional, more closely related species. We describe how the combination of better study design, improved genomic sequencing techniques, and new bioinformatic and data analytical tools will improve comparative genomics and ultimately bridge the gap between traditional model and nonmodel organisms.

''From elephant to butyric acid bacterium-it is all the same'' (Kluyver and Donker 1926). Or, stated another way, ''anything found to be true of E. coli must also be true of elephants'' (Monod 1997). The use of animal models in biomedical research stems from the concept expounded in these famous adages. Indeed, there is considerable unity in the genomic, molecular, and biochemical mechanisms across bacteria, fungi, worms, flies, mice, elephants, and humans.
Most basic biological principles and genetic regulatory mechanisms are thought to have arisen very early in evolutionary history, as they are similar in prokaryotes, as well as single-cell and multi-cellular eukaryotes. While there is conservation of key molecular pathways among all organisms, during evolution existing components may be rearranged to acquire novel and/or improved functions. For example, at the protein level, subtle differences in fetal versus adult hemoglobin sequences modulate oxygen affinity. Additionally, at the organ level, the development of a vulva in female roundworms provides a mechanism for oviposition and heterosexual reproduction, and this arose from tissue remodeling processes dependent upon numerous cell-cell interactions and intercellular signaling pathways [see (Sternberg 2005)]. Similarly, multiple solutions may have evolved to address specific needs (e.g., formation of different kinds of eyes from existing structures to perceive images and facilitate vision). It is clear that there are many ways, involving molecular, structural, and/or enzymatic components, of achieving a particular function or organ. Such evolutionary tinkering and the concomitant altering or recycling of various components have contributed to integrated adaptations through the process of natural selection (Jacob 1977). It is likely that the tremendous diversity in species lifespan has also arisen as a consequence of evolutionary fiddling, when modulation of certain processes may indirectly affect lifespan (Jacob 1977). While gymnosperms currently hold the longevity record for living organisms [e.g., the tree tumbo Welwitschia mirabilis (1500 years; Misra et al. 2015) and the bristlecone pine Pinus longaeva (5000 years; Brutovská et al. 2013)], animal lifespans span an approximately 60,000-fold range. Mayflies and gastrotrichs live a mere 3 days, whereas the ocean quahog (hard clam) lives >500 years (Carey 2002; Butler et al. 2013).
Differences in species maximum lifespan are most apparent when comparing those species living in extreme environments with those living in more favorable, protected, and stable environments; animals living in ephemeral, temporary ponds (e.g., killifish) complete their lifecycle in a matter of days/weeks (Valdesalici and Cellerino 2003), while those living in more stable habitats such as the ocean [e.g., quahog (Butler et al. 2013) and bowhead whale (Craig George and Bockstoce 2008)] can live for centuries, possibly because they are subject to more relaxed evolutionary pressures. Animals living in different niches, particularly those habitats considered harsh, are likely to have evolved distinct mechanisms favoring their survival in that milieu. If not, that particular species would become extinct. Many of these ecophysiological adaptations may also influence species lifespan (Sanchez et al. 2015). Large differences in species maximum lifespan potential [MLSP] must ultimately be genetically encoded; however, if a specific ''lifespan program'' existed, one might expect that genetic revertants of such a program could be identified to enable immortality. To date, no such observation has been made. So while it is highly unlikely that age of death is programmed, genetic regulation of the many pathways that contribute to survival of the individual (e.g., resistance to stress, damage eradication, and/or somatic repair), as well as genetic regulation of the metabolic pathways that inflict age-related damage, is likely to be directly involved in organismal longevity (Gems and Partridge 2013). Observations based on ''natural evolutionary experimentation'' may elucidate mechanisms explaining how some species are able to live healthier and longer lives than others. 
Comparative biology may also reveal whether or not a mechanism is unique to a species (i.e., a private mechanism) or ubiquitously shared (i.e., a public mechanism) across evolutionarily distinct clades (Martin 1997; Partridge and Gems 2002). Comparative genomics is a relatively new field in which the comparison of genome sequences is used to identify candidate genetic variants associated with particular traits of interest (Alfoldi and Lindblad-Toh 2013). While elements of comparative genomics can potentially help identify genetic factors that contribute to extreme longevity, the lack of high-quality genomes and the large evolutionary distances amongst species pose difficult challenges to overcome in the search for genetic determinants that modulate aging.

Evolution of genes that modulate longevity

Comparative genomics is a powerful tool that exploits millions of years of evolution to identify the natural mechanisms that may have led not only to prolonged longevity, but also to different phenotypes associated with disparate resistance to cancer and other diseases. These genomic variations, rooted in evolution, are likely to be well conserved and possibly directly pertinent to human health and lifespan. In keeping with the maxim of the Nobel laureate August Krogh, that for every biological question there is an animal model ideally suited to tackle that research focus (Jorgensen 2001), naturally long-lived species, like the naked mole-rat, are prime candidates for identifying mechanisms involved in both delaying the onset, and slowing the rate, of aging. Aging and longevity research has relied extensively on a battery of commonly used and relatively short-lived eukaryote model organisms, namely yeast, worms, flies, and fish, as well as mice and rats, to explore both genetic and environmental determinants of lifespan.
While these short-lived models have each yielded a number of fascinating findings and insights into hypotheses surrounding extended lifespan and healthspan, they may also have constrained this complex, multifactorial field to areas in which they are best suited, most notably short-term intervention studies and genetic manipulations. Studies based upon these organisms revealed that changes in even a single gene (e.g., age-1, phosphatidylinositol 3 kinase) can extend lifespan of Caenorhabditis elegans (Friedman and Johnson 1988). Similar lifespan extension effects are evident in flies and mice when the insulin/IGF, gastric hormone, and the Nrf2/skn-1 detoxification/xenobiotic pathways are genetically manipulated (Kenyon et al. 1993;Brown-Borg et al. 1996;Morris et al. 1996;Clancy et al. 2001;An and Blackwell 2003;Sykiotis and Bohmann 2008;Selman and Withers 2011;Ziv and Hu 2011). Furthermore, various types of dietary restrictions, whether limiting access to calories or amino acids, generally have a conserved effect of enhancing longevity across model systems (McCay et al. 1935;Klass 1977;Weindruch and Walford 1982;Jiang 2000;Selman and Withers 2011;McIsaac et al. 2016), although exceptions do exist (Liao et al. 2010). Collectively, these data support the premise that longevity can be modulated, likely through the regulation of nutrient signaling and stress response, which in turn impacts development, growth, reproduction, and survival. Strikingly, monozygotic human twins, as well as genetically identical individuals of these animal models (e.g., C57BL/6 mice), even when housed in the same environment and fed the same diet do not all have the same lifespans, suggesting that stochastic factors and epigenetic drift influence the hazard rate (i.e., the risk of death as it changes over a lifespan) and subsequent mortality (Finch and Kirkwood 2000;Herndon et al. 2002;Fraga et al. 2005). 
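The hazard rate mentioned above is commonly summarized in biogerontology by the Gompertz mortality law, h(t) = A·e^(Gt); the sketch below (with purely illustrative parameter values, not taken from the text) computes the hazard and the mortality-rate doubling time:

```python
import math

def gompertz_hazard(t, a, g):
    """Gompertz mortality law: the hazard (instantaneous risk of death)
    rises exponentially with age t. a = baseline hazard, g = Gompertz slope."""
    return a * math.exp(g * t)

def mortality_rate_doubling_time(g):
    """Time for the hazard to double; a common summary of the rate of aging."""
    return math.log(2) / g

# Illustrative values only: with g = ln(2) per year the hazard doubles yearly.
assert abs(mortality_rate_doubling_time(math.log(2)) - 1.0) < 1e-12
assert abs(gompertz_hazard(2.0, 0.01, math.log(2)) - 0.04) < 1e-12
```

Comparing fitted Gompertz slopes between species or cohorts is one standard way to quantify whether aging is attenuated, independent of the baseline hazard.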
Collectively, these findings contribute to the many convincing arguments that death and/or aging are neither genetically programmed nor under evolutionary selection pressure (Martin 1997; Partridge and Gems 2002; Kirkwood and Melov 2011). Rather, they are due to co-evolution with other traits. For example, short- and long-lived organisms may exhibit different responses to changes in the environment and thereby indirectly affect survival. Indeed, the various nutrient-sensing genes shown to modulate longevity (e.g., insulin/IGF-1, FOXO, mTOR) appear to regulate resource allocation for somatic maintenance and thereby influence survival [reviewed in (Kapahi 2010)].

Mammalian models of aging

The mouse is the most widely used mammalian model in biomedical research. Raised in cages in protected vivaria, they are not subjected to typical natural selective pressures such as predation or food limitation. In the wild, we contend that rodents are unlikely to die from age-associated disease linked to genomic instability (e.g., cancer) or a disruption in proteostasis (e.g., proteinopathies), as they do in the laboratory. Rather, their death in the wild more commonly results directly from stochastic and random events (e.g., predation, extreme weather conditions leading to starvation, and/or infections) (Collins and Kays 2014). As such, laboratory mice are more like sedentary humans, and are likely to provide many insights into a first world western lifestyle. The evolutionary theory of aging posits that longevity assurance mechanisms may have evolved in those species that have low extrinsic mortality, such as animals that (a) live underground and are protected from climatic extremes, germs, and predation (e.g., naked mole-rats), (b) can escape hostile habitats (e.g., bats and birds), or (c) have effective body armor (e.g., porcupines and tortoises) (Chen and Maklakov 2012).
As such, mice, which experience high extrinsic mortality in the wild and have shorter lifespans than predicted on the basis of body size in captivity, may not necessarily be the best organism with which to search for longevity assurance mechanisms. Moreover, as organisms become less fecund with age, the forces governing natural selection decline, and the genetic factors that influence somatic maintenance and organismal survival in the face of stochastic damage likely play a greater role in the determination of survival and longevity of the organism (Hamilton 1966). Greater selection pressures that enhance somatic maintenance, thereby extending the time taken before accrued damage can induce a significant decline in function and viability, may also extend the lifespan of the animal. It is thus highly likely that long-lived species employ mechanisms different from those of short-lived species to defend their soma, and they are therefore useful animal models with which to address the evolved mechanisms involved in retarding the aging process and extending longevity, particularly of other long-lived species, like humans. One such animal model of exceptional biogerontological interest is the naked mole-rat (Austad 2009).

Unusual features of the long-lived naked mole-rat

The naked mole-rat is only one of over 50 subterranean dwelling rodents found throughout the world (Begall et al. 2007; Table 1). This species belongs to the Ctenohystrica supraorder of rodents, made up of the superfamilies Phiomorpha (African mole-rats, rock rats, and porcupines) and Caviomorpha (tuco tucos, degus, and guinea pigs). Recently, it was concluded that naked mole-rats diverged 31 million years ago (mya), prior to the diversification of other African mole-rat species (the Bathyergidae family), and the naked mole-rat has now been placed in a separate family (Heterocephalidae; Fig. 1) (Faulkes et al. 2004;Patterson and Upham 2014).
Among mammals, only the Heterocephalidae (the naked mole-rat) and Bathyergidae (African mole-rats, i.e., the long-lived Damaraland mole-rat [MLSP >20 y; Buffenstein pers. com.]) include species that are considered truly eusocial (Bennett and Faulkes 2000); that is, similar to the eusocial insects (e.g., wasps, bees, and ants), there is a well-defined social hierarchy in which breeding is restricted to a single female ("the queen") within the colony. All mole-rat species (collectively the Bathyergidae, Heterocephalidae, and Spalacidae families) occupy underground niches. Although some of these evolutionarily divergent species lead a solitary existence and others live communally, they share many morphological and physiological traits considered adaptive to life below ground. All mole-rats are herbivores, with most species restricting feeding to plant components found solely below ground, namely bulbs and tubers (Bennett and Faulkes 2000). Moreover, all appear to have lower resting metabolic rates than their above-ground-dwelling counterparts, and all appear to be extremely tolerant of hypoxia. All mole-rats studied to date are considered long-lived for their body size, living at least twice as long as predicted allometrically. This phenomenon is most pronounced in the naked mole-rat, which has a longevity quotient (ratio of the observed maximal lifespan to that predicted by body mass) of >4, similar to that observed in humans, another very long-living species (Buffenstein 2005).

(Table 1: African mole-rats from the Bathyergidae and Heterocephalidae families are generally lumped together; together with the blind mole-rats and zokors of the Spalacidae, they are all subterranean dwelling rodents. Social structure varies considerably within these families, with some species eusocial, social, or solitary. The naked, Damaraland, and blind mole-rats are the most heavily researched with regard to genome sequencing and the other physiological and biochemical characteristics that may contribute to their long lifespans and prolonged healthspans. Footnotes: (a) Cryptomys hottentotus is made up of several subspecies; only C. hottentotus hottentotus is described here, although subspecies share the same haplotype and are of a similar body size. (b) The blind mole-rats (referenced as the superspecies Spalax ehrenbergi here) also include a number of other Spalax species and subspecies. (c) Tachyoryctes splendens includes several subspecies.)

The naked mole-rat is endemic to the arid and semi-arid regions of north east Africa (Sherman et al. 1991), living in large eusocial family groups in an extensive maze of sealed burrows ranging from 1 to 8 feet below the ground (Jarvis 1981). Having resided in this inhospitable underground milieu in sub-Saharan Africa since the early Miocene (~23 mya), naked mole-rats are extremely tolerant of a variety of conditions most other species cannot survive, including low partial pressures of oxygen and high amounts of carbon dioxide (Larson and Park 2009;Blass 2014). Naked mole-rats appear to be resistant to the highly poisonous allelochemicals (e.g., cyanide/glycosides) found in the plant storage organs they consume. They are also resistant to a wide variety of toxins, including heavy metals, DNA damaging agents, and chemotherapeutics (Salmon et al. 2008;Lewis et al. 2012), as well as to the toxic effects associated with the nitrogenous wastes (ammonia and methane) in their communal latrines (LaVinka and Park 2012). Collectively, these findings indicate a role for enhanced xenobiotic metabolism mechanisms in the long-lived naked mole-rat and other phylogenetically distant mole-rat species.
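The longevity quotient described above is simply the ratio of the observed maximum lifespan to the lifespan predicted allometrically from body mass. A minimal sketch of the calculation follows, using an illustrative power law of the form MLSP = a·M^b; the coefficient values below are placeholder assumptions chosen for illustration, not the published regression used by Buffenstein (2005).

```python
def predicted_mlsp(mass_g, a=4.88, b=0.153):
    """Allometric maximum-lifespan prediction in years.

    The power-law form MLSP = a * M^b is standard for this kind of
    estimate; the default coefficients here are illustrative
    placeholders, not the fitted values from the literature."""
    return a * mass_g ** b

def longevity_quotient(observed_mlsp_y, mass_g):
    """Longevity quotient (LQ): observed / predicted maximum lifespan."""
    return observed_mlsp_y / predicted_mlsp(mass_g)

# Under these placeholder coefficients, a ~35 g naked mole-rat living
# beyond 30 years scores an LQ close to 4, while a similarly sized
# laboratory mouse (~4 y) scores well below 1.
nmr_lq = longevity_quotient(31, 35)
mouse_lq = longevity_quotient(4, 35)
```

Any species whose LQ exceeds 1 outlives its allometric prediction; the point of the quotient is that it normalizes away body size, making a 35 g rodent directly comparable to a human.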
The evolution of defenses against extrinsic mortality factors likely leads to protection against intrinsic factors linked to metabolic toxic moieties as well. In vivaria, the naked mole-rat has a maximum captive lifespan that exceeds 30 years (Edrey et al. 2011a). For the better part of these three decades, naked mole-rats exhibit an extended healthspan and compression of the period of morbidity; they experience very little change in a number of physiological and biochemical characteristics that are typically associated with aging, including a sustained lean mass and well-maintained bone composition (O'Connor et al. 2002;Pinto et al. 2010), cardiac function (Grimes et al. 2012, 2014), basal metabolic rate (O'Connor et al. 2002), and proteome maintenance (Perez et al. 2009;Rodriguez et al. 2011). In addition to these distinctive features of slowed and attenuated aging, naked mole-rats are incredibly resistant to spontaneous neoplasia (Seluanov et al. 2009;Liang et al. 2010;Edrey et al. 2011a); over the last four decades of captive housing, reports of cancer are exceedingly rare. We have observed only one occurrence of naturally occurring cancer (a lymphoma in a 21-year-old female) in over 2000 necropsies. Two zoos recently reported rare instances of cancer in their long-maintained naked mole-rats (Delaney et al. 2016).

(Fig. 1: Phylogenetic relationships of rodent species. The Bathyergidae, Heterocephalidae, and Spalacidae families are in two different suborders of Rodentia. Genome data are often compared between naked mole-rats and mice; however, mice are more closely related to blind mole-rats (Spalax), which are in the Muroidea superfamily. Naked mole-rats are more closely related to guinea pigs (Ctenophiomorpha); both species diverged ~39.5 million years ago (mya). Naked mole-rats and mice diverged from their common ancestor ~73.1 mya (and blind mole-rats from mice ~47.4 mya). Additionally, mice appear to be evolving faster than either of the mole-rat species, which could account for many of the differences observed between these unique organisms.)

Naked mole-rats show pronounced resistance to experimentally induced tumorigenesis (Liang et al. 2010) as well. In sharp contrast, cancer is often observed in C57BL/6 mice; even when cancer is not the direct cause of death, the majority of mice die with some signs of neoplastic lesions (Ikeno et al. 2009). Pronounced cancer vulnerability is thought to contribute significantly to the short lifespan of most laboratory mouse strains, which is approximately half of that predicted on the basis of body size (Hulbert et al. 2007). Whereas in eusocial insects such as bees the queens have maximum lifespans that are 2-10 fold longer than those of worker insects (Howell and Usinger 1933;Bozina 1961;Ribbands 1964;Wilson 1971;Seeley 1978;Hölldobler and Wilson 1990), the mole-rat breeding female (i.e., the colony's queen) exhibits a lifespan similar to that of the subordinates in captive colonies (Buffenstein 2008). Although there are very few data on longevity of mole-rats in their native habitat, it has been reported that in the wild, with an army of subordinates to provide food and protection for the queens, the dominant breeders are found in the same burrow system for ~17 years, 4-fold longer than wild-living subordinates that may be preyed upon when foraging or may leave their natal colony during dispersal events (Begall et al. 2007). Strikingly, the queen shows no decline in fertility with age and continues to produce offspring throughout her long life (Edrey et al. 2011b), with the ability to produce >1000 offspring during her "reign" (Buffenstein 2005). With only one breeding female and 1-4 breeding males in sealed underground habitats, these colonies not only remain relatively isolated from other populations of naked mole-rats, but also exhibit considerable inbreeding (Sherman et al. 1991). Indeed, DNA fingerprinting reveals that naked mole-rats have the highest coefficient of inbreeding (0.45) for any natural population of mammals on record (Reeve et al. 1990). A small fraction (<1 %) of the mole-rat population belongs to the "dispersomorph" caste (animals that occasionally abandon their colony in search of a new colony), and these are thought to play a critical role in increasing genetic heterogeneity (O'Riain et al. 1996;Clarke and Faulkes 1997;Faulkes et al. 1997).

(Table 2: Subterranean rodents share a number of features considered adaptations for life underground. The three mole-rat species highlighted here (the naked, Damaraland, and blind mole-rat) represent the Heterocephalidae, Bathyergidae, and Spalacidae families. All are morphologically streamlined (lack of ear pinnae and cryptorchidism) and visually impaired, with greater reliance on the somatosensory system. Linked to life in a sealed burrow system where gas exchange is restricted to diffusion through soil, all show reduced metabolic rates, heart rates, and oxygen consumption with concomitant changes in blood oxygen-carrying capacity. Not surprisingly, mole-rats are resistant to hypoxia and hypercapnia; neither convective cooling nor evaporative water loss is particularly effective in humid sealed burrows, rather loss of metabolic heat is primarily facilitated by high rates of thermal conductance. Low metabolic rates coupled with high rates of thermal conductance give rise to lower resting body temperatures and less strict regulation of body temperature than observed in species that live above ground. Data obtained from (Bennett and Faulkes 2000;Lacey 2000;Cernuda-Cernuda et al. 2002;Begall et al. 2007).)

Genomics and transcriptomics of the naked mole-rat

Three studies have generated a significant amount of genomic (Kim et al. 2011;Keane et al. 2014) and transcriptomic (Yu et al. 2011) data for the naked mole-rat.
A handful of additional studies used these data to delve into the more unusual features of naked mole-rats (Kim et al. 2011;Fang et al. 2014;Davies et al. 2015). Here, we discuss how methods of analysis, study design, and data quality can have a profound impact on effectively revealing mechanisms that might explain unique features of the naked mole-rat. We review these data and report on key findings, as well as highlight areas where additional research and genomic analyses are needed.

Genome assembly of the naked mole-rat

Two groups have independently published draft assemblies of the naked mole-rat genome (Kim et al. 2011;Keane et al. 2015) (Table 3). Both assemblies used shotgun whole-genome sequencing with high-coverage Illumina data and a range of insert sizes. This approach is commonly used and has been applied to many species (Zerbino and Birney 2008;Li et al. 2010;Gnerre et al. 2011;Luo et al. 2012). The drawback of this approach, however, is that the assemblies are relatively fragmented and contain a high percentage (15 %) of unfilled gaps, hindering analyses of gene regulation and expression. Early short-read Illumina assemblies were shown to contain a significant number of misassemblies and may collapse homologous genes and/or pseudogenes (Zimin et al. 2012). With the improvement of sequencing platforms and assemblies, e.g., long-read technologies for whole-transcript sequencing (Sharon et al. 2013;Tilgner et al. 2014), as well as the ongoing assembly of many more genomes, the quality of data will continue to improve. Genomes of comparable species like mouse, rat, and guinea pig (Table 3), on the other hand, have been assembled by several research groups using multiple types of sequencing data (e.g., Illumina shotgun sequencing, Sanger sequencing, bacterial artificial chromosomes, etc.). Because of the inclusion of more types of data, these assemblies are significantly less fragmented and more complete than those of the naked mole-rat.
In addition, these assemblies are constantly being updated using new technologies (Lander et al. 2001;Venter et al. 2001;Cheung et al. 2003;Consortium 2004;She et al. 2004;Chaisson et al. 2015). Comparing genomes with different levels of completeness presents a fundamental challenge, limiting both the data interpretations and the conclusions drawn. For example, since better resolution of repetitive elements is expected when using longer reads, the observation of a lower abundance of repetitive elements in the naked mole-rat genome (25 vs. ~35 % in other murid genomes) could be an artifact of the limitations imposed by the different sequencing and assembly technologies used. Thus, researchers should exercise caution when directly comparing genomes of the naked mole-rat to other species, for some of the differences observed may be due to technical artifacts rather than evolutionary changes. (Table 3 note: the naked mole-rat genome assemblies are less contiguous (lower contig N50) and more gapped (higher gap percentage) than those of the comparison species.)

Estimates of gene content

Approximately 22,000 genes were identified in the original sequencing (hetGla1) of the naked mole-rat genome (Kim et al. 2011;Bens et al. 2016). This number is similar to that reported in other mammalian genomes. Analysis of gene homology revealed that ~17,000 naked mole-rat genes have a direct ortholog in either human, mouse, or rat. Moreover, when comparing annotation data (Kim et al. 2011), the naked mole-rat genome appears to exhibit 93 % synteny to that of the human genome, with strikingly less synteny to that of other rodents (83 % to mouse and 80 % to rat). These differences most likely reflect structural rearrangements relative to the murid common ancestor. Syntenic comparisons to mammals identified 750 gained genes and 320 lost genes in the naked mole-rat (Kim et al. 2011). Further analysis of these data found 66 additional genes present in the naked mole-rat that are absent in 11 other mammals.
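The two assembly-quality metrics invoked here, contig N50 and gap percentage, are easy to state precisely. N50 is the length of the shortest contig in the smallest set of longest contigs whose summed length covers half the assembly; gap percentage is simply the fraction of unresolved ('N') bases. A minimal sketch:

```python
def n50(contig_lengths):
    """N50: walk down the contigs longest-first and return the length
    of the contig at which the running sum first reaches half of the
    total assembly size."""
    half = sum(contig_lengths) / 2
    running = 0
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if running >= half:
            return length
    return 0

def gap_percentage(assembly_seq):
    """Percentage of unfilled-gap bases ('N') in an assembly string."""
    return 100 * assembly_seq.upper().count("N") / len(assembly_seq)
```

For example, `n50([100, 200, 300, 400])` is 300: the 400 bp and 300 bp contigs together reach the 500 bp halfway mark. A more fragmented assembly of the same total size would have a lower N50, which is exactly the sense in which the naked mole-rat drafts compare unfavorably in Table 3.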
Since these genes have no known homologs, their function is currently unknown. The second sequencing and gene annotation (hetGla2) effort (Keane et al. 2014) identified ~42,000 coding sequences, of which ~13,000 had a best reciprocal BLAST hit to guinea pig, mouse, or human. Several thousand other coding sequences exhibited high-quality one-way alignments. Notably, current annotations of the naked mole-rat genome were generated bioinformatically using sequence homology to well-curated genomes (e.g., mouse and human). Yet, approximately 50 % of the RNAseq reads from the naked mole-rat align to the unannotated parts of its genome [Fig. 2; data from Kim et al. (2011)], suggesting that a significant fraction of the transcriptome is escaping gene annotation. This limits transcriptome comparisons to the recovered orthologs only, preventing the identification of novel naked mole-rat genes. As a potential step forward, a transcriptome-assembly-based framework (FRAMA) was created that annotated genes from high-coverage RNAseq data collected in multiple tissues (Bens et al. 2016), and this seems to be a promising direction for making subsequent analyses more reliable. Using human gene annotations from the highly curated human genome as a point of reference, FRAMA identified ~17,000 corresponding naked mole-rat genes, indicating that roughly 88 % of human genes have a naked mole-rat ortholog. This suggests that, despite ~90 mya of divergence, such an approach can perform well for ortholog inference (Bens et al. 2016). Nevertheless, even the best current genome annotation algorithms can create incorrect annotations of a genomic sequence in well-studied genomes, such as those of humans and mice; this is obviously exacerbated in less well-studied, more exotic species. For instance, miscalled exons in an annotation can cause frameshifts, leading to erroneously truncated genes (Zhang et al. 2012). Such may be the case with UCP1 in the cetaceans.
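The "best reciprocal BLAST hit" criterion used for the hetGla2 ortholog calls can be expressed compactly: a gene pair is accepted only when each gene is the other's top-scoring hit, which is what distinguishes it from the weaker one-way alignments mentioned above. A toy sketch, where the dictionaries of hypothetical gene identifiers stand in for parsed BLAST results:

```python
def reciprocal_best_hits(best_a_to_b, best_b_to_a):
    """Return the set of (a, b) pairs in which a's best hit is b AND
    b's best hit is a; one-way hits are discarded."""
    return {
        (a, b)
        for a, b in best_a_to_b.items()
        if best_b_to_a.get(b) == a
    }

# Hypothetical top hits (made-up identifiers), e.g., naked mole-rat
# -> mouse and mouse -> naked mole-rat.
nmr_to_mouse = {"nmrGeneA": "musGene1", "nmrGeneB": "musGene2", "nmrGeneC": "musGene7"}
mouse_to_nmr = {"musGene1": "nmrGeneA", "musGene2": "nmrGeneX", "musGene7": "nmrGeneC"}

orthologs = reciprocal_best_hits(nmr_to_mouse, mouse_to_nmr)
```

Here `nmrGeneB` is dropped because its best mouse hit points back to a different gene, the situation that produces "high-quality one-way alignments" without a reciprocal ortholog call.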
It was previously reported that whales (minke, fin, bowhead, and sperm) all have a premature stop codon in the C-terminal region when compared to terrestrial mammals (Keane et al. 2015). As with the naked mole-rat, changes in UCP1 in these cetaceans potentially contribute to altered mass-specific metabolic rates and thermogenic function, and this change can be inferred to be a "longevity assurance mechanism" that contributes to the increased lifespan of these organisms (Keane et al. 2015). However, the most recent data available on NCBI for multiple cetaceans, including dolphins and whales, as well as naked mole-rats, cows, humans, and mice, show that all of these, with the exception of the bowhead whale, have full-length coding sequences for the UCP1 protein (Fig. 3a). This truncated amino acid sequence may result from the loss of one nucleotide in the bowhead whale UCP1 sequence (Fig. 3b). The stop codon in the genomic sequence may, however, be a sequencing error: for example, if a "T" was missed in the Illumina data but hypothetically reinserted bioinformatically, the correct translation of a full-length UCP1 gene can occur (Fig. 3c); Illumina sequencing errors are known to be common in homopolymeric regions (Schirmer et al. 2016).

(Fig. 2: Alignment rates of naked mole-rat RNA-sequence data to its transcriptomes and genome. Paired-end RNAseq data measured from a mixed pool of 7 naked mole-rat tissues (Kim et al. 2011; SRA: SRS213856) were aligned using Bowtie2 (Langmead and Salzberg 2012) to the naked mole-rat transcriptome derived from genscan annotations (Burge and Karlin 1998), the naked mole-rat transcriptome derived from NCBI annotations (Keane et al. 2014), and the entire naked mole-rat genome (hetGla_female_1; Keane et al. 2014). Recent NCBI annotations produce the highest fraction of transcriptome alignments; however, ~40-50 % of RNAseq reads align to the genome but not the annotated transcriptome. Similar distributions of alignment rates were observed with other alignment methods (not shown).)
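The single-nucleotide argument sketched for bowhead whale UCP1 can be made concrete: deleting one base from a T homopolymer shifts the reading frame and can surface an apparent premature stop codon, while restoring the base recovers the full-length product. The sketch below uses a made-up toy ORF, not the real UCP1 sequence.

```python
BASES = "TCAG"
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
# Standard genetic code, built from the canonical TCAG codon ordering.
CODON_TABLE = {a + b + c: AMINO[16 * i + 4 * j + k]
               for i, a in enumerate(BASES)
               for j, b in enumerate(BASES)
               for k, c in enumerate(BASES)}

def translate(seq):
    """Translate a DNA string codon by codon, stopping at the first
    stop codon ('*')."""
    protein = []
    for i in range(0, len(seq) - 2, 3):
        aa = CODON_TABLE[seq[i:i + 3]]
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

full_orf = "ATGAAATTTGTAACGCATTAA"            # toy ORF with a TTT homopolymer
dropped_t = full_orf.replace("TTT", "TT", 1)  # one T "missed" by the sequencer
```

With the missing T, translation terminates after three residues ("MKL") because the shifted frame immediately hits a TAA; reinserting the T restores the full six-residue product ("MKFVTH"), mirroring the Fig. 3c reasoning.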
More work is needed to determine whether this is indeed an error or a real species difference. As shown in Fig. 3, this gene is well conserved among all the mammals examined beyond the region of the putative stop codon, suggesting that this is not a pseudogene. The presence of the stop codon must be confirmed by additional orthologous genomic methods with different error profiles. This is a cautionary tale and illustrates the necessity of both improved manual curation and whole-genome sequences.

Rapidly and slowly evolving genes

Interspecies differences in mutation rates are generally thought to indicate different evolutionary pressures, while loss-of-function and frame-shift mutations are indicative of the loss of selection pressure on the genome to maintain functional protein forms. Sequences with a greater accumulation of nonsynonymous differences (Ka) relative to synonymous differences (Ks) may highlight key pathways that are under positive selection pressure (Wagner 2002). Thus, Ka/Ks is a convenient marker of selective pressure: homologous genes with a Ka/Ks ratio greater than 1 are considered to be under positive selection. Previous studies analyzing orthologous genes of terrestrial and subterranean species revealed that the nucleotide substitution rate of coding sequences was markedly lower in those species that live below ground (Du et al. 2015;Shao et al. 2015). In general, the rates of both synonymous and nonsynonymous substitution have slowed in the naked mole-rat relative to both the mouse and the guinea pig (Du et al. 2015). This apparent decrease in evolutionary rates of change may be associated with life in a stable and protected environment and may be due to a longer generation time. In addition, the naked mole-rat's Ka/Ks ratio is higher than that of the mouse, which can be attributed to a lack of purifying selection due to a smaller population size (Kim et al. 2011).
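The Ka/Ks logic can be made concrete with a back-of-the-envelope sketch. This toy version uses crude Nei-Gojobori-style site counting, ignores multiple-hit correction, and skips codon pairs differing at more than one position; it is no substitute for dedicated tools (e.g., PAML's codeml) and is meant only to illustrate the definition.

```python
BASES = "TCAG"
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
# Standard genetic code in the canonical TCAG ordering.
CODE = {a + b + c: AMINO[16 * i + 4 * j + k]
        for i, a in enumerate(BASES)
        for j, b in enumerate(BASES)
        for k, c in enumerate(BASES)}

def synonymous_sites(codon):
    """Expected synonymous sites: for each position, the fraction of
    the three possible point mutations that leave the amino acid
    unchanged."""
    s = 0.0
    for pos in range(3):
        for alt in BASES:
            if alt != codon[pos]:
                mutant = codon[:pos] + alt + codon[pos + 1:]
                if CODE[mutant] == CODE[codon]:
                    s += 1 / 3
    return s

def ka_ks(seq1, seq2):
    """Return (pN, pS): proportions of nonsynonymous and synonymous
    differences per respective site, with no multiple-hit correction.
    Codon pairs differing at >1 position are skipped (simplification)."""
    S = N = Sd = Nd = 0.0
    for i in range(0, min(len(seq1), len(seq2)) - 2, 3):
        c1, c2 = seq1[i:i + 3], seq2[i:i + 3]
        s_sites = (synonymous_sites(c1) + synonymous_sites(c2)) / 2
        S += s_sites
        N += 3 - s_sites
        diffs = [p for p in range(3) if c1[p] != c2[p]]
        if len(diffs) == 1:
            if CODE[c1] == CODE[c2]:
                Sd += 1
            else:
                Nd += 1
    return (Nd / N if N else 0.0, Sd / S if S else 0.0)
```

A purely synonymous change (e.g., GCT to GCC, both alanine) yields pN = 0, while a purely nonsynonymous change (GCT to ACT, alanine to threonine) yields pS = 0; real analyses aggregate many codons and report Ka/Ks = pN/pS, with values above 1 flagging positive selection.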
Intriguingly, the naked mole-rat genes showed greater similarity with human genes than with those of mice (Fig. 4). Note, however, that despite a high level of synteny overall, only ~60 % of the annotated naked mole-rat genes have a good one-to-one ortholog in both human and mouse (Keane et al. 2014), suggesting that this comparison is somewhat biased (Fig. 4). We performed full Smith-Waterman alignment (Smith and Waterman 1981;Zhao et al. 2013) of the naked mole-rat transcriptome (derived from genscan annotations) to those of the mouse [mm10 UCSC annotations] and the human [hg19 UCSC annotations]. This allows for a more forgiving alignment and is generally considered the gold standard. Despite the limitations of these analyses, the naked mole-rat transcriptome showed double the alignment rate to, and greater parallels with, the human transcriptome than with that of the mouse. The first study of the naked mole-rat transcriptome was completed prior to the publication of the genome and described only the genes that were upregulated in the naked mole-rat when compared to the C57BL/6 mouse, since genes that are expressed at a lower level may simply reflect poor homology with the mouse genome (Yu et al. 2011). Surprising similarities are evident among the transcriptomes of the subterranean rodents Spalax galili, Heterocephalus glaber, and Fukomys damarensis (Bennett and Faulkes 2000;Begall et al. 2007;Davies et al. 2015). Several hundred genes are under positive selection in all three of these phylogenetically distinct subterranean rodent species, many of which are likely to reflect shared adaptations to a subterranean lifestyle (i.e., life under hypoxic conditions, resistance to cytotoxins, and cancer).
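Smith-Waterman local alignment, the "gold standard" invoked above, fills a dynamic-programming matrix in which every cell is floored at zero and then traces back from the maximum-scoring cell. The sketch below uses a minimal nucleotide scoring scheme (match +2, mismatch -1, gap -2); the transcriptome comparisons in the text would have used substitution matrices and optimized implementations such as SSW (Zhao et al. 2013), so treat this purely as an illustration of the algorithm.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Return (best score, aligned slice of a, aligned slice of b)."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best, best_pos = 0, (0, 0)
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment: running scores never drop below zero.
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            if H[i][j] > best:
                best, best_pos = H[i][j], (i, j)
    # Trace back from the best-scoring cell until a zero cell is reached.
    i, j = best_pos
    al_a, al_b = [], []
    while i > 0 and j > 0 and H[i][j] > 0:
        step = match if a[i - 1] == b[j - 1] else mismatch
        if H[i][j] == H[i - 1][j - 1] + step:
            al_a.append(a[i - 1]); al_b.append(b[j - 1]); i -= 1; j -= 1
        elif H[i][j] == H[i - 1][j] + gap:
            al_a.append(a[i - 1]); al_b.append("-"); i -= 1
        else:
            al_a.append("-"); al_b.append(b[j - 1]); j -= 1
    return best, "".join(reversed(al_a)), "".join(reversed(al_b))
```

Because negative running scores are clipped to zero, only the best locally matching region survives: aligning "TTTTGGGG" against "CCCCGGGG" recovers just the shared "GGGG" block, which is what makes the method "forgiving" when comparing imperfectly annotated transcriptomes.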
On the other hand, multiple pathways associated with proteostasis, genomic stability, and cell cycling are more similar between the naked mole-rat, other nonmuroid rodents (including above-ground-dwelling rodents, i.e., guinea pigs, chinchillas, squirrels, and jerboas), and humans than between laboratory mice and rats and these species. These pathways likely underwent accelerated evolutionary change in both rats and mice and much slower divergence in the nonmuroid rodents and humans (Vinogradov 2015). This divergence of the Muridae (rats and mice) from the ancestral lineages is thought to reflect a relaxation in purifying selection of these pathways essential for both genomic and proteomic stability (Vinogradov 2015). This may explain the greater propensity for cancer and the shorter-than-expected lifespan, predicted on the basis of body size, of muroid rodents (Hulbert et al. 2007). Despite the caveats outlined above, comparing the long-lived naked mole-rat to its similarly sized, shorter-lived mouse counterpart reveals a number of phenotypic differences between the species (Table 4) that appear to be likely candidates pertinent to both aging studies and key age-associated diseases. These interspecific differences are linked to oxidative stress response, genomic maintenance, and proteostasis.

Genomic and transcriptomic comparisons of specific biochemical pathways

Oxidative damage

Transcriptome analyses revealed that mitochondrial, oxidoreduction, and fatty acid metabolism pathways are upregulated in naked mole-rats with respect to mice (Yu et al. 2011). Paradoxically, tissues from captive naked mole-rats have high levels of oxidative damage to lipids, proteins, and DNA from an early age (Andziak and Buffenstein 2006;Buffenstein et al. 2008).

(Fig. 4: Divergence between naked mole-rat, mouse, and human protein-coding sequences. Outliers from the diagonal represent a greater-than-expected accumulation of amino acid changes. Naked mole-rat proteins that show greater similarity to those of humans than to those of mice (as indicated by an increase in the Ka/Ks ratio relative to mouse) are colored in orange and lie above the line of identity.)

Not surprisingly, these damage levels are paired with a lackluster antioxidant defense. Naked mole-rats have levels of key antioxidant enzymes similar to those of mice, with the exception of cytosolic glutathione peroxidase [cGPX], which is 70-fold lower in naked mole-rats (Andziak et al. 2005;Lewis et al. 2013). Transcriptomic studies confirm that both Gpx1 and Gpx4 are expressed at lower levels in naked mole-rats compared to mice (Yu et al. 2011). Additionally, two of the six peroxiredoxins (Prdx2 and Prdx5) are expressed at lower levels than observed in mice, at both the transcript and protein level (Fang et al. 2014). Despite the low levels of antioxidant expression, naked mole-rats (and other long-lived rodents, including the blind mole-rat) do have elevated levels of other cytoprotective pathways (e.g., NRF2, molecular chaperones) compared to the mouse (Edrey et al. 2014;Rodriguez et al. 2014;Lewis et al. 2015).

Genome maintenance and cancer resistance

A global comparison of previously known human cancer-associated oncogenes and tumor suppressors in the naked mole-rat genome did not reveal any striking difference in copy number variation (MacRae et al. 2015). Rather, a lower frequency of mutations has been observed in a subset of 518 genes linked to genome maintenance in naked mole-rats, mice, and humans (MacRae et al. 2015), which may suggest that positive selection is acting on this gene group. Consistently, many of the enzymes directly pertinent to regulating DNA repair are expressed at higher levels in liver, brain, and testes than observed in mice (Kim et al. 2011;MacRae et al. 2015), including enzymes involved in tumor suppression (e.g., TP53), base excision repair, mismatch repair, and nonhomologous end-joining (MacRae et al. 2015).
It has been thought that increased levels of p16 are associated with the "contact inhibition phenomenon" observed in naked mole-rat primary cultures of dermal fibroblasts in some laboratories (Seluanov et al. 2009). Further analysis of the INK4 locus revealed unique splicing patterns encoding an additional protein, pALT(INK4a/b), which may contribute to cell cycle regulation and cancer resistance in naked mole-rats (Tian et al. 2015). In-depth genome analysis revealed that p16INK4a may be altered in structure, producing a smaller (14-kDa) protein with earlier stop codons (Kim et al. 2011) that may also impact cell cycle progression and tumorigenesis. Intriguingly, the distantly related, but nonetheless cancer-resistant, blind mole-rat (Spalax ehrenbergi; Table 1) has purportedly evolved anticancer mechanisms different from those of the naked mole-rat. p53 in Spalax differs from that of most mammals, with a change in amino acid sequence akin to a specific mutation frequently found in tumors (Shams et al. 2013). This subtle difference in the p53 protein is not without major consequence; it alters the ability of this species to induce apoptosis and enhances the immune-inflammatory processes promoting interferon B1-induced necrosis (Shams et al. 2013). In contrast, the naked mole-rat p53 does not show this potentially beneficial mutation (Gorbunova et al. 2014), but is more similar to that of humans than to that of mice and rats, with similar proline-rich domains that reportedly have been subject to positive selection (Keane et al. 2014).

(Table 4. Phenotypes of the naked mole-rat relative to the mouse: genome maintenance (DNA repair) increased; cancer incidence decreased; telomere length decreased; telomerase decreased; tolerance of hypoxia and hypercapnia increased; mTOR signaling decreased; proteome maintenance increased; autophagy increased; proteasome activity increased. Naked mole-rats are extraordinarily long-lived compared to the similarly sized mouse, and many previous studies have started to characterize aging-related phenotypes in the naked mole-rat. Compared to shorter-lived mice (i.e., C57BL/6), naked mole-rats are cancer resistant and tolerant of exogenous stressors, including hypoxia and hypercapnia. They also have elevated proteome and genome maintenance, autophagy, and proteasome activity levels compared to mice. Strikingly, they have high levels of oxidative damage even from a young age compared to mice, and both species have similar levels of antioxidant enzymes (i.e., SOD). Despite this, naked mole-rats have high constitutive levels of cytoprotective NRF2-signaling activity. This may be one critical pathway that contributes to their lengthened healthspan and lifespan.)

Telomeres and telomerase

A number of genes in the naked mole-rat genome have undergone positive selection, including genes involved in the function and regulation of telomerase (Tep1 and Terf1) (Yu et al. 2011). Telomere shortening is thought to play a pivotal role in aging: as telomeres reach a critically short length, cells enter a senescent state. Studies evaluating telomere length and telomerase activity have produced equivocal and contradictory findings [(Seluanov et al. 2007;Gomes et al. 2011); Yang, Hornsby and Buffenstein pers. com.]. Gomes et al. (2011) reported the telomeres of naked mole-rats to be much shorter (one-third to one-half) than those of lab mice and rats, and of similar length to those in humans, with telomerase activity in cultured naked mole-rat dermal fibroblasts just one-third of that observed in mouse fibroblasts (Gomes et al. 2011). The naked mole-rat telomerase genes have unique polymorphisms and promoter structure (compared to guinea pigs and humans) (Evfratov et al. 2014). Tep1, Terf1, and other genes that regulate telomerase activity likely also contribute to the slow-aging and cancer-resistant phenotype of the naked mole-rat (Yu et al. 2011).
Tolerance of hypoxia, hypercapnia, ammonia, and pain

Naked mole-rats (as well as other subterranean mole-rat species) evolved to live successfully in a sealed maze of hypoxic underground tunnels where gas exchange through soil is poor. Transcriptomic differences relative to mice and rats are evident in all three subterranean species of mole-rats, even under normoxia (Fang et al. 2014). For example, hemoglobin alpha (Hba1 and Hba2) and neuroglobin (Ngb) are more highly expressed in the mole-rats (Avivi et al. 2010;Fang et al. 2014). Both naked mole-rat and guinea pig hemoglobin alpha share a unique amino acid change (Pro44His) thought to convey better tolerance of hypoxia in the low-oxygen atmospheres encountered underground and at high altitudes, respectively (Fang et al. 2014). Hypercapnia is deadly to most mammals and generally evokes considerable pain; increased levels of CO2 turn the air acidic, stimulating pain receptors and causing a burning sensation in the nasal passages and eyes (Brand et al. 2010). The lack of a pain response in naked mole-rats is attributed to motif changes in the Na(V)1.7 sodium channel (Scn9a), a feature shared with Damaraland and blind mole-rats, as well as other species that are chronically subjected to similar atmospheric conditions (Fang et al. 2014), and to the lack of expression of substance P and calcitonin gene-related peptide (Cgrp) (Park et al. 2008;Park and Buffenstein 2012). This difference in sequence, and most likely the subsequent negative regulation, could account for their indifference to chemically induced pain (Kim et al. 2011;Park and Buffenstein 2012).

Insulin and mTOR signaling

The insulin and mTOR pathways are considered key players in mouse and human aging, with downregulation of these pathways and/or altered signaling through mutations of receptors linked to increased longevity (Selman et al. 2008;Selman and Withers 2011;Lamming and Sabatini 2011;Johnson et al. 2013;Lamming et al. 2013;Mulvey et al. 2014).
Transcriptomic analyses revealed that the naked mole-rat has a divergent sequence of the insulin β-chain, similar to that of the guinea pig and other close relatives, i.e., hystricognath rodents (Opazo et al. 2005). Moreover, RNAseq of liver and brain tissue taken from non-fasted, non-stressed animals shows that many components of the insulin and mTOR pathways are downregulated, which might be indicative of slower growth rates (Kim et al. 2011). Both insulin and insulin receptor gene expression are attenuated, in addition to insulin receptor substrate 1 (Irs1). Furthermore, PI3K isoforms are also downregulated (with the exception of Pik3cb) (Kim et al. 2011), as are Igf1 and the Igf1 receptor (Igf1r) (Kim et al. 2011). In contrast, insulin-like growth factor 2 (Igf2) transcript levels are high (Kim et al. 2011). Igf2 has high homology with insulin and is commonly expressed at high levels in utero, with levels dropping dramatically after birth (Lui and Baron 2013). Igf2, and its binding protein Igf2bp2, are retained at high levels in adult naked mole-rats (Kim et al. 2011), thereby maintaining a neonatal-like mode of glucose handling and most likely remaining highly sensitive to different nutrient signals. The Igfbp protease, pregnancy-associated plasma protein-A (Papp-a), known to modulate mouse lifespan, reportedly has a different sequence in the naked mole-rat from that of mice and is also constitutively expressed at low levels, giving rise to abnormal glucose tolerance tests (Kramer and Buffenstein 2004; Brohus et al. 2015) and raising the possibility that Papp-a may play a role in naked mole-rat longevity.

Proteome maintenance

The observed decline in nutrient-sensing gene expression described above may have a profound effect on protein translation and turnover. The proteostasis system of the naked mole-rat is significantly more robust than that of mice (Pride et al. 2015).
Naked mole-rats exhibit greater translational fidelity, apparently without a reduction in translation rate; their proteins also appear to be more resistant to oxidation, heat, and urea (Perez et al. 2009). This greater structural stability is maintained with age (Perez et al. 2009). Autophagy is also elevated and sustained during aging in the naked mole-rat (Rodriguez et al. 2011; Pride et al. 2015), and double-membrane autophagosomes have been observed in higher numbers in multiple tissues of naked mole-rats compared to mice (Zhao et al. 2014). Proteasome activity in multiple tissues was found to be significantly higher in naked mole-rats than in shorter-lived mice (Perez et al. 2009; Rodriguez et al. 2012; Edrey et al. 2014), and the naked mole-rat proteasome is also exceptionally resistant to oxidative stress and to the competitive inhibitors MG-132 and bortezomib (Rodriguez et al. 2014). Interestingly, it appears that the African and Middle Eastern mole-rats have higher levels of proteasome activity than above-ground-dwelling rodents, and that the levels of chymotrypsin- and caspase-like activity in muscle tissues correlate significantly with maximum lifespan (Rodriguez et al. 2016).

Challenges of comparative genomics

Multiple mammalian genomes have been sequenced and assembled in recent decades. However, the information that would enable robust comparative genomic analyses is still sparse. As of today, only 19 rodent genomes have been assembled, with varying levels of quality, and most sequenced species are only distantly related to the naked mole-rat. The ideal comparative genomic study would include a large number of species as well as multiple individuals within each species to account for intraspecific genetic variation. This limitation in the number of genomes and their disparate quality makes linking sequence variation to a specific phenotype more of an anecdotal, rather than data-driven, task.
Comparative genomic studies assessing interspecific differences in genomic data sets present numerous problems. Of critical importance is the choice of the reference genome, with the trade-off being whether to choose a closely related species that is not well annotated or a more distantly related species that has been extensively studied and is well annotated. For example, while mice and humans have been extremely well studied, they diverged from mole-rats ~70 and ~90 mya respectively (Kim et al. 2011). The genomes of the more closely related guinea pig (divergence of ~39.5 mya) and the Damaraland mole-rat (divergence of ~31.2 mya) have also been assembled, albeit with lower quality and poorer annotation than those of mice and humans. Possibly because of extensive deep sequence analyses of the human genome, the naked mole-rat genome shows the best homology to that of humans, rather than to other rodents. In species with poor coverage or poorly annotated genomes, it is particularly difficult to identify genes that are truly under-expressed, orthologs, and/or splice variants.

[Table 5 excerpt: in C. elegans, knockdown of rpn-1, rpn-3, rpn-6, rpn-7, rpn-8, rpn-9, rpn-11, rpt-1, rpt-4, rpt-5, rpt-6, pas-5, pas-6, pbs-2, pbs-3, pbs-4, pbs-5, or pbs-7 decreases lifespan (Ghazi et al.). Caption: Perturbations to genes related to the proteasome, as well as the proteasome itself, result in changes to lifespan and healthspan of a variety of organisms, including yeasts, worms, flies, mice, and even humans. The long-lived naked mole-rat also has elevated proteasome activity, but we have little information about the genes involved. Thus, by interrogating the genome specifically for proteasome-related genes, we can study these genes more in depth and compare with other species, to identify beneficial (or detrimental) mutations or polymorphisms. Differences in gene/protein sequence can then be studied in vitro or in vivo to identify causal variants. References in Table: Torres ...]
Data analyses are thus limited to those genes with unambiguous annotation. Focusing comparisons on more closely related species could control for much of the confounding genetic diversity. For example, the recent paper comparing transcriptomes of nine African mole-rat species is probably the best comparative genomic study using the naked mole-rat to date. This genome-wide screen suggested that genes related to tumor suppression, telomere regulation, cell division, DNA repair, and stress response were under positive selection in the African mole-rat clade. This provides further mechanistic insight into what may contribute to their notable cancer and stress resistance, and whether these gene expression patterns are unique to the naked mole-rat or shared among closely related species. Given the large evolutionary distance of the genomes usually compared, finding the causal variation for a phenotype has to be hypothesis driven and requires careful scrutiny and interpretation, for observed species differences likely reflect more about their evolutionary history, ecophysiological traits, or divergent phylogeny. This would certainly be the case if one compared the genome of the long-lived bowhead whale directly to that of the long-lived naked mole-rat.

[Fig. 5: A summary of future project directions. Current comparative genomic studies usually compare one long-lived and one short-lived species (i.e., the naked mole-rat vs. the mouse) and result in a large number of genes that may or may not be involved in healthspan and/or longevity and that have not been experimentally validated. We propose that comparing the genomes of a large number of both long- and short-lived species and focusing on specific phenotypes that may contribute to extended healthspan and lifespan would yield more focused and meaningful genome data. These results would then be confirmed through hypothesis-driven experimental validation to determine further impact on longevity.]
While it is tempting to force hypotheses to fit a priori predictions, this approach often gives rise to spurious "just-so stories." Two clear examples were recently highlighted. In the original naked mole-rat genome paper (Kim et al. 2011), the "hairless" phenotype of naked mole-rats was attributed to a substitution of a conserved amino acid in the identified hair growth associated protein (HR). This interpretation was based on findings that similar mutations in this particular codon cause hair loss in mice, rats, and humans. However, two other hystricognath rodents, namely the Damaraland mole-rat and guinea pig, share this mutation in the HR gene, yet have hairy coats (Delsuc and Tilak 2015). It therefore becomes clear that differences in the mole-rat and mouse/human HR gene simply reflect a phylogenetic divergence from mice and men (Delsuc and Tilak 2015; Davies et al. 2015). The conclusion concerning hyaluronan synthase 2 (Has2) is another such example. Differences in Has2 have been used as a causal explanation for the extraordinarily low incidence of cancer in naked mole-rats (Seluanov et al. 2009; Tian et al. 2013). Unique amino acid residues in this enzyme reportedly result in the synthesis of a higher molecular weight hyaluronan in naked mole-rats than in mice (Tian et al. 2013). While the Has2 sequence is unique to the naked mole-rat, extending comparisons to a wide range of other species showed that some of the proposed mutations are shared with several species, including guinea pigs. These mutations do not always result in cancer resistance, although their specific functional ramifications are unknown. Interestingly, high molecular mass hyaluronan is also reportedly expressed in the blind mole-rat (Spalax galili) (Tian et al. 2013), despite the fact that it does not share any of the significant mutations observed in the naked mole-rat, yet reportedly is resistant to cancer.
Correlation of phenotype and genotype

As discussed above, one of the confounding factors for comparative genomic analyses is poor definition of the phenotype to be explained. Longevity itself is not a well-defined phenotype, but a byproduct of a myriad of beneficial phenotypes and/or the absence of detrimental phenotypes, including cancer resistance and better cellular and proteome maintenance. Genomics, overall, is still a descriptive science; when comparing one entire genome to another, the data will be overwhelming and less likely to identify or target specific mechanisms. We suggest that defining the phenotypes conserved between long-lived species (i.e., diverged from short-lived species) could be a major step forward. For instance, when the phenotype is well defined, as in the case of the different rRNA processing in naked mole-rats that results in a different structure of the 28S ribosomal subunit (Fang et al. 2014), the comparisons are limited to a subset of genes and suggest meaningful genotype-phenotype connections. In another example, data across many species (including worms and flies) indicate that proteasome function is critical to lifespan, and may also promote extended healthspan and longevity-related phenotypes (Ghazi et al. 2007; Tonoki et al. 2009; Kruegel et al. 2011). Previous data have also shown that increased proteasome activity is observed in naturally long-lived species (Pickering et al. 2015), including the naked mole-rat (Rodriguez et al. 2012, 2016), although we know nothing about the genetic mechanisms behind these phenotypes. By interrogating the genomic data for specific, proteasome-related genes (Table 5), we can get a more in-depth picture of not only those specific gene sequences, but also how small differences may impact this longevity-related phenotype, and experimentally test the causality of these genetic differences, using tools like CRISPR, in vitro and/or in vivo (Fig. 5).
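The gene-focused strategy described here can be illustrated with a minimal sketch: given aligned orthologous protein sequences for a gene of interest, one can flag alignment columns where a focal long-lived species diverges from every comparator. The species names and sequences below are invented placeholders for illustration, not real proteasome data.

```python
# Hypothetical illustration: flag residues where a focal species differs
# from every comparator in an aligned set of orthologous sequences.
# All sequences here are invented placeholders, not real gene data.

def divergent_positions(focal: str, comparators: dict[str, str]) -> list[int]:
    """Return 0-based alignment columns where the focal sequence
    differs from all comparator sequences (gap characters ignored)."""
    positions = []
    for i, residue in enumerate(focal):
        if residue == "-":
            continue
        others = {seq[i] for seq in comparators.values()}
        if residue not in others:
            positions.append(i)
    return positions

aligned = {
    "mouse":      "MKTAYIAKQR",
    "rat":        "MKTAYIAKQR",
    "guinea_pig": "MKTAYVAKQR",
}
naked_mole_rat = "MKSAYVAKQR"

# Column 2 (S vs. T) is unique to the focal species; column 5 (V) is
# shared with guinea pig and so is not flagged.
print(divergent_positions(naked_mole_rat, aligned))  # → [2]
```

Positions flagged this way are only candidates; as the surrounding text stresses, shared ancestry (e.g., a residue also present in guinea pig) must be ruled out before inferring any link to phenotype.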
Another approach to better define the phenotype of increased lifespan includes measuring the transcriptomic and proteomic responses of long-lived and short-lived species to a similar battery of perturbations. For instance, to better define the phenotype of cancer resistance of the naked mole-rat, one could examine differences in the transcriptional response to cancer-causing agents between cancer-resistant and cancer-prone species. This approach will enable identification of genes, pathways, or regulatory molecules that are differentially regulated between long-lived and short-lived species, focusing the comparative genomic query to find the variation that is causal for this change. Finally, genetic differences that generate significant phenotypic variation between even closely related species are postulated to be a result of regulatory, not coding, sequence changes [reviewed in Varki and Altheide (2005)]. Better annotation of regulatory sequences in the species of interest (e.g., chromatin profiling) could make linking genetic variation to phenotypic difference easier. In summary, technological improvements will provide us with genomic data of far greater resolution than what is available today. Similarly, more carefully controlled comparative studies using state-of-the-art bioinformatics tools will yield high-quality, unambiguous data. As a direct result of these improvements, greater insights into the unique traits of the naked mole-rat will become possible, particularly when combined with phenotypic profiling. Improving access to the secrets within the naked mole-rat genome will elucidate the mechanisms naked mole-rats employ to resist the vagaries of aging and prevent age-associated diseases from gaining hold.
Effectiveness of HemCon Dental Dressing versus Conventional Method of Haemostasis in 40 Patients on Oral Antiplatelet Drugs

Objectives: The purpose of the study was to evaluate the effectiveness of the HemCon Dental Dressing (HDD) in controlling post-extraction bleeding and to ascertain its role in healing of extraction wounds, as compared to control. Methods: The 40 participants in the study were all receiving oral antiplatelet therapy (OAT). A total of 80 extractions were conducted without altering the patients' drug therapy. The extraction sites were divided into 2 groups: one group received a HDD, and the control group where the conventional method of pressure pack with sterile gauze under biting pressure (followed by suturing if required) was used to achieve haemostasis. Results: All HemCon treated sites achieved haemostasis sooner (mean = 53 seconds) than the control sites (mean = 918 seconds), which was statistically significant (P <0.001). Postoperative pain in the HDD group (1.74) was also significantly lower than in the control group (5.26) (P <0.001). Approximately 72.5% of HDD-treated sites showed significantly better postoperative healing when compared to the control site (P <0.001). Conclusion: HDD proved to be an excellent haemostatic agent that significantly shortened the bleeding time following dental extraction in patients on OAT. Additionally, HDD offered significantly improved postoperative healing of the extraction socket and less postoperative pain.

Medical problems are one of the roadblocks for oral and maxillofacial surgeons; they are expected to manage patients with such ailments with utmost care. 1 Excessive bleeding in patients on oral antiplatelet therapy (OAT) is currently one of the most commonly encountered complications in dentistry, making OAT patients some of the most challenging patients to treat.
2 Although the prevalence of bleeding disorders in patients being treated by dentists and oral surgeons appears to be small, a study in 1994 showed that 2.3% of 1500 adults seeking dental treatment were on OAT and, indeed, bleeding disorders were more prevalent than cancer, renal disease, or joint replacement. 3 [5][6][7][8][9] A common approach to managing such patients is suspension of the OAT for 3 to 4 days before surgery, which exposes the patients to a higher risk of thromboembolism, myocardial infarction, and cardiovascular accidents. 10,11 Intraoperative or postoperative bleeding is not significantly reduced by this regimen. [13][14] Many styptics such as the gelatin sponge, fibrin glue, and tranexamic acid have been used in the past to control intra- and postoperative haemorrhage. The HemCon Dental Dressing (HDD) (HemCon Medical Technologies, Portland, Oregon, USA) is a USA Food and Drug Administration (FDA) approved material which has been used extensively under the name HemCon Bandage to stop bleeding in combat wounds and other severe trauma. 15,16 The HDD is a new-generation medical device which offers self-adhesion and provides a protective layer that can be custom cut according to a patient's need. Although haemostasis proceeds rapidly, there is no heat generated that might cause thermal injury to the wound site. 17 HDD is chitin-based, manufactured from freeze-dried shrimp shells. Chitin is an insoluble polysaccharide polymer of glucosamine that is purified and partially deacetylated to form a soluble chitosan aqueous gel. 18,19 Chitosan gel is then freeze dried in moulds to make a highly electropositive sponge-like material that is haemostatic and adapts well to oral surgical wounds. Chitosan is a food-grade material which can be safely ingested, and its accidental inhalation risk is almost zero as it undergoes dissolution.
Chitosan has a positive charge and attracts red blood cells (RBC) and platelets, which are negatively charged, through ionic interaction; thus, a strong seal is formed at the wound site. 16 This supportive, primary seal allows the body to activate its coagulation pathway effectively, initially forming organised platelets. HDDs are designed to maintain this seal and serve as a frontline support structure as the platelets and RBC continue to aggregate until haemostasis is achieved. HDDs do not rely solely on the clotting cascade to maintain haemostasis. 20 The strong sealing action allows the body to form a clot naturally. HDD can also be used in haemophiliac patients as clot formation is based on electrostatic charge attraction instead of the normal quantities and functioning of clotting factors. In addition to providing haemostasis, HDD also offers an antibacterial barrier.

The purpose of this study was to evaluate the effectiveness of HDD in controlling post-extraction bleeding and its role in the healing of extraction wounds as compared to more conventional methods such as pressure gauze, followed by suturing if required. For our study, we hypothesised that, in OAT patients, HDD would yield better outcomes as compared to more conventional measures used after tooth extractions.

[Key points: The study results showed earlier haemostasis, less discomfort, and better healing without changing a patient's OAT regimen when undergoing minor surgical procedures using HDD. The study results also support early treatment without repeated evaluations of international normalised ratios in patients receiving OAT regimens.]

Methods

A total of 40 adult OAT patients undergoing extractions were chosen randomly for this study after institutional review board approval was
obtained. All patients underwent extractions without any alteration to their antiplatelet medication regimens. Included in the study were OAT patients undergoing multiple tooth extractions who were between the ages of 35 and 75 years, and had international normalised ratio (INR) values ≤3 (1-3). Diabetic patients with well-controlled sugar levels were also included. Patients undergoing a single tooth extraction or multiple extractions limited to one quadrant; those with an allergy to seafood; those indicating that they were smokers, and those with genetic bleeding disorders were excluded from the study.

All patients underwent various investigations preoperatively such as haemoglobin estimation (Hb), bleeding time (BT), clotting time (CT), INR, and platelet count (PC). BT measures the primary phase of haemostasis: the interaction of the blood vessel wall and the formation of a haemostatic plug. CT indicates the time interval from the formation of a platelet plug to the completion of vasoconstriction and clot formation. Similar and identical extraction sites were selected within each patient (for example, extraction of the first molars in the right and left quadrants of the lower jaw). A split-mouth study design was used. By this method, a patient would receive HDD on one side of the mouth (study site). On the other side (control side), the conventional method of pressure packing with a sterile piece of gauze under biting pressure, followed by suturing, if required, was used to achieve haemostasis after extraction. To reduce study variability, similar contralateral or counterpart teeth were extracted wherever possible (32 patients). In the other 8 patients, similar sized teeth were selected (i.e. single-rooted tooth for single-rooted tooth and molar for molar). Neither the surgeon nor the patients could be blinded to the use of HDD versus the control method; however, every second patient's left side/upper arch was used as a study site. After obtaining consent, each patient underwent
atraumatic simple extractions under local anaesthesia using lignocaine with adrenaline (1:80,000). The procedure was completed in a single visit by a single surgeon. Surgical sites were randomly selected for treatment either by a complete or custom-cut HDD in one quadrant and a control consisting of biting pressure on a sterile cotton gauze dressing, followed by suturing if required, in the other quadrant in each patient. Custom-cut HDDs were used to fit loosely into extraction sockets that were smaller than the size of a complete HDD (10 mm x 12 mm x 5.5 mm). A HDD was placed into the extraction socket at the height of the crestal bone wherever possible. Direct finger pressure was placed over the extraction site for 40-60 seconds after placement of the HDD. Sutures were placed whenever haemostasis was not achieved under biting pressure over the sterile gauze piece after 900 seconds (15 minutes). Time to haemostasis was noted for both the HDD and control surgical sites using a stopwatch. All patients were prescribed a diclomol tablet (diclofenac sodium 50 mg + paracetamol 500 mg) every 8 hours for 3 days. All patients were reviewed by another surgeon on the 7th postoperative day for assessment of pain and healing. Relative pain scores were assessed 1 week postoperatively. Self-reported pain scores on a scale of 0-10 were taken into account to estimate postoperative pain. Healing was assessed on the basis of epithelization using the visual analogue scale, the presence of liver clots, pus discharge, dry socket, and the extent of the sinus opening. Healing was compared between the study and control sites and was assessed on a scale of 1-3, with 1 representing the healing of the study site being significantly worse than the control; 2 representing the healing of the study site being the same as the control, and 3 meaning the healing of the study site was significantly better than the control.
Secondary bleeding was assessed through patients' self-reporting and/or by the presence or absence of liver clots. Secondary bleeding/venous haemorrhage is usually characterised by the slow oozing of dark red blood and can manifest as liver clots. Liver clots, also known as currant jelly clots, are defined as red, jelly-like clots that are rich in haemoglobin from erythrocytes within the clot. For statistical analysis, a paired t-test and the Wilcoxon signed-rank test were applied to test statistical significance.

Results

Of the 40 patients, there were 33 males and 7 females. The mean BT was 192 seconds and the mean PC was 246,000/cc [Table 1].

HDDs that had adhered to the soft/hard tissue adjacent to the extraction site were easily removed after wetting with sterile normal saline, and there was no adherence to the sterile gauze piece used on the control site postoperatively. Time to haemostasis was noted for both the HDD and control surgical sites using a stopwatch. The study site achieved haemostasis at 53 seconds, a considerably shorter time than the control site at 918 seconds [Table 2]. There was statistically improved haemostasis with the use of HDD.

The postoperative pain experienced by patients throughout the week while performing day-to-day activities such as eating, tooth brushing, etc. was recorded, with 0 being no pain and 10 being the worst pain the patient had ever experienced. The average pain score at the study site (1.74) was considerably less than at the control site (5.26) [Table 2].

Although we did not encounter any cases of liver clots, sinus opening, pus discharge, or dry socket, there was comparatively better epithelialization with the use of HDD. The 29 study sites showed significantly better healing as compared to the control sites [Table 2].
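The split-mouth design yields paired observations (an HDD site and a control site in the same patient), which is why paired tests were applied. As a sketch of the underlying arithmetic only (the haemostasis times below are invented for illustration, not the study's data), the paired t statistic is the mean of the per-patient differences divided by its standard error:

```python
import math
import statistics

def paired_t(sample_a: list[float], sample_b: list[float]) -> float:
    """Paired t statistic: mean of per-pair differences divided by
    the standard error of those differences (sample stdev / sqrt(n))."""
    diffs = [a - b for a, b in zip(sample_a, sample_b)]
    n = len(diffs)
    return statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))

# Invented haemostasis times in seconds for 5 patients
control = [900.0, 950.0, 880.0, 940.0, 910.0]  # gauze/pressure site
hdd     = [50.0,  60.0,  45.0,  55.0,  52.0]   # HDD site

t = paired_t(control, hdd)
print(round(t, 1))  # a large positive t: control sites took far longer
```

In practice the t statistic would be compared against the t distribution with n-1 degrees of freedom to obtain the p-value the authors report; the Wilcoxon signed-rank test applies the same pairing idea to the ranks of the differences.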
Discussion

Clopidogrel and ticlopidine inhibit adenosine diphosphate (ADP)-induced platelet fibrinogen binding, whereas aspirin inhibits the activity of cyclooxygenase. Platelets are affected for the life of the cell, and complete reversal of antiplatelet activity does not occur for approximately 2 weeks. In this study, the HDD was used to control bleeding in extraction wounds in patients receiving OAT. The results show that in every patient the time to haemostasis was shorter when using HDD than when using the control. Shen et al. showed a release of growth factor from human platelets stimulated by chitosan exposure, which may help explain our positive findings. 21 Cunha-Reis et al. showed cell adhesion consistent with the nature of the HDD material used in this study. 22 In our study, sites receiving the HDD had improved postoperative healing with minimal complications when compared to the control site. This may be attributed to the antibacterial properties of chitosan, which have been investigated in in vitro studies. 23,24 Results demonstrated that chitosan increased permeability of the inner and outer membranes and ultimately disrupted the bacterial cell membranes, releasing their contents. Thus HDD provides an antibacterial barrier against a wide range of Gram-positive and Gram-negative organisms, including methicillin-resistant Staphylococcus aureus (MRSA), vancomycin-resistant enterococcus (VRE), and Acinetobacter baumannii. 23,24 In an animal study, Azargoon et al. found HDD to be as efficacious as ferric sulfate in haemostasis and wound healing.
25 The self-adhesive nature of HDD is caused by the electrostatic attraction of RBC to the HemCon material. As the RBCs bind to the HDD surface, it forms a dense viscous mass that provides adhesion, and also adapts to the alveolar bone's irregularities under digital pressure, thereby providing frictional locking with the bony socket. Considering the competency of HDD in forming a barrier that seals the wound from exposure to the environment, we can state that only a small amount of HDD (i.e. about half of a 10 mm x 12 mm piece) is required to attain complete haemostasis [Figure 1]. There was no clinical indication of the necessity to pack the extraction socket fully, and the excess was easily trimmed chairside, according to the patients' needs. Additionally, we observed initially slightly raised pain scores in sites with HDD, which subsided once the acetic acid was fully dissolved in oral fluids. To work, HDD material requires active bleeding; thus, the more bleeding takes place, the better the HDD material performs, which is quite useful during surgical procedures.

Table 1: Details of patient age and various investigations (N = 40)
Table 2: Comparison of bleeding time, pain score and quality of healing
Toward a Protocol for Transmasculine Voice: A Service Evaluation of the Voice and Communication Therapy Group Program, Including Long-Term Follow-Up for Trans Men at the London Gender Identity Clinic

Abstract

Purpose: A service evaluation was undertaken with 10 participants identifying as trans men who received voice and communication group therapy and 12-month follow-up at the London Gender Identity Clinic between February 2017 and March 2018, to investigate levels of satisfaction, how helpful they found the program in facilitating vocal change and skill development, and whether they would recommend it to others. Methods: Participant evaluations of overall and ideal rating of masculinity of voice and level of feeling comfortable with voice, evaluations of voice skills, and changes in speaking and reading fundamental frequency were retrospectively reviewed and analyzed. Results: Six participants reported being very satisfied with the service; four were satisfied. Eight participants found the program very helpful in achieving voice and communication change; two found it helpful. Eight strongly agreed and two agreed with recommending the service. Participants' overall and comfort ratings of voice significantly increased (p<0.01), while there was no significant change in ideal ratings (p=0.063), and there was a significant decrease in the difference between overall and ideal ratings (p<0.01). Participants achieved a significant decrease in fundamental frequency for reading and speaking (p<0.01), a significant decrease in voice fatigue (p=0.039) and restriction in voice adaptability (p<0.01), a significant increase in confidence in public speaking (p<0.01), but no significant change in vocal projection (p=0.07). Conclusion: Ten trans men reported high levels of satisfaction with the voice group program and long-term follow-up, making significant positive shifts in voice skills and vocal self-perception.
These findings apply locally but suggest appropriate interventions toward a transmasculine voice modification protocol.

Introduction

Transmasculine people form a diverse group, 1 and studies addressing the invisibility of this population, the psychosocial impact of voice, self-perception of vocal masculinity, and experience of voice and communication therapy services are starting to emerge. 2,3 "Transmasculine" is an overarching term used in this article to refer to individuals assigned female at birth who have a more masculine, sometimes nonbinary, identity; signaling birth assignation, though, requires sensitive handling as it may be experienced as shaming. 4,5 Vocal researchers tend to report lowering of the speaking fundamental frequency (F0) as a result of the action of exogenous androgen therapy in thickening vocal fold mass to gender-confirming and satisfactorily masculine-sounding levels. 2,3,[6][7][8] This has led to claims that transmasculine people experience fewer barriers to achieving their desired vocal identity than transfeminine people, 9,10 and that transmasculine voice therapy is unnecessary. 11 However, while self-perception of voice improves for many transmasculine people, 12 pitch change outcomes with testosterone can be highly variable 13,14 and satisfaction levels with vocal change suboptimal. 15 Davies et al. 16 emphasize that transmasculine individuals commenced on testosterone frequently report an enduring difference between their habitual and "passing" pitch and a high occurrence of vocal misgendering. Indeed, there is growing evidence that transmasculine people have particular needs beyond the testosterone-induced effect on pitch in terms of developing voice and communication skills in dynamic psychosocial contexts.
14,17 Azul 1 considers the ''vocal situation'' of transmasculine speakers to be potentially challenging as a result of the interplay between complex factors: presentational (the anatomy and physiology of the speaker/singer's voice and their vocal-communicative behaviors), attributional (the listener's perception and meanings attributed to the speaker/singer's voice), and normative (the cultural, environmental, and heterocisnormative lens through which concepts of gendered voice and vocal function are viewed and experienced). Clinical practice needs to take into account the diversity of this population and the complexity of factors influencing successful and effective voice function as part of gender congruence. 18 Nygren et al. 19 recommend systematic assessment and a therapy focus, addressing safe vocal change as part of complete identity. Transmasculine people's vocal identities are beginning to be understood within and beyond a desire to achieve an unequivocal masculine end-result in binary cisnormative terms. 20,21 Tackling the social invisibility of this population and creating opportunities for transmasculine people to discover personal vocal and communicative authenticity are paramount. 15,16,22 Voice and communication interventions applied to transmasculine people are beginning to be tested. 16,21,22 Azul et al. 18 state that more research is needed exploring the parameters of functional voice production and communication skills relevant to transmasculine people, placing participants' self-evaluation at the forefront of the enquiry. Mills et al. 
21 report early stages of developing a voice and communication protocol for transmasculine people through a pilot and follow-up voice group program. Azul's factors 1 above formed a useful framework for program content that addressed vocal dynamics 21-23 as part of presence and personal impact, 24 and took account of the perception of others (Table 1). [Table 1 summarizes the program content: vocal embodiment (effects of binding, rib and back stretches), posture, laryngograph pitch measurement and discussion of cisnormative parameters, exploring safe pitch change with or without testosterone, optimizing breath support with increased vocal fold mass on testosterone, chest and pharyngeal resonance (low humming, chest tapping, yawn talk, jaw and base-of-tongue release), mindfulness and compassion-focused awareness, presence and personal impact, assertiveness training, role play and improvisation, loudness and intonation, voice projection and articulatory muscularity, voice education and voice care, group feedback and discussion, and follow-up and review sessions.] Client input on what was considered most useful in voice therapy was central to producing a practical guide for trans and nonbinary people, including transmasculine voice. 
17 In addition, studies using approaches that are solution focused, 25 mindful, 26 systematic, 27 narrative, 28 and compassion focused 29 further informed interventions in vocal dynamics (pitch, resonance, loudness, intonation, voice quality) 17,23,30,31 and social communication (public speaking, projection, assertiveness, nonverbal signals, presence) 17,21,22,24 offered in group contexts. 16,17,21,32 Group therapy programs have been reported as effective for transfeminine and transmasculine people because group cohesion, commonality of experience, shared learning, feedback, and witnessing all act as a catalyst for voice and communication change. 17,21,22,32 This article describes a service evaluation of the voice and communication therapy group program at the London Gender Identity Clinic, which consisted of two workshops and follow-ups at 6 and 12 months, for a group identifying specifically as trans men (a subgroup of transmasculine people identifying as men while affirming their history as assigned female sex at birth). The aims were to investigate levels of service user satisfaction, how helpful they found the program in facilitating vocal change and skill development (indicated by self-perception ratings and pitch measures), and whether they would recommend the program to other service users. Design The service evaluation received written approval from the Tavistock and Portman NHS Foundation Trust Clinical Audit Offices, and service users' informed consent for participation in the evaluation was gathered before the start of the project. It involved a retrospective review of clinical data of one cohort of voice group participants between February 2017 and March 2018, which included qualitative service evaluation questionnaires, participant self-evaluations of voice and voice skills, follow-up interviews, and quantitative measures of modal speaking and reading fundamental frequency (SFF and RFF). 
It describes a sample of 10 transmasculine people, identifying as trans men, who attended a generic information-giving seminar as a waiting list initiative and subsequently participated in the voice masculinization therapy group program. This program, delivered by two senior gender specialist speech and language therapists, consisted of two 3-h workshops held a month apart, with follow-up appointments at 6 months and at 12 months after workshop 2. Workshops took account of Azul's ''vocal situation'' framework, 1 delivering interventions in voice change mechanics and communication, shown in Table 1. Participants' mean age was 26.2 years (range 19-43 years). Five participants had commenced testosterone before the workshop but reported dissatisfaction with their vocal development. Of these five participants, mean length of time on testosterone was 11.6 months (range 6-18 months). Four other participants commenced testosterone at the 6-month follow-up, and one stopped before the 12-month follow-up; one participant preferred not to start testosterone at all. Measures Participants filled out a service evaluation questionnaire where they were asked how satisfied they were with the voice group (1 = very dissatisfied, 5 = very satisfied), how helpful they found the program (1 = very unhelpful, 5 = very helpful), and the extent to which they agreed with a statement about recommending the service to others (1 = strongly disagree, 5 = strongly agree). Participants also filled out a self-report questionnaire in which they were asked to rate their voice on three overarching dimensions: the overall perception of how their voice sounded on a feminine-to-masculine scale (1 = very feminine, 10 = very masculine), how they would ideally like their voice to sound (using the same scale), and how comfortable they felt with their voice (1 = very uncomfortable, 10 = very comfortable). 
Measures were taken at the beginning of the first workshop, at the end of the second workshop, and at the 6- and 12-month follow-up time points. Participants were also asked to rate their development in a number of voice and communication skills at the beginning of workshop 1 and the end of workshop 2: vocal adaptability, voice projection, public speaking and relational presence, and vocal stamina (reduction in vocal fatigue). Objective laryngographic pitch measures of RFF and SFF were taken at the beginning of workshop 1 and the end of workshop 2. The Rainbow Passage 33 was used for a reading sample and a 2-min monologue topic on a hobby/interest was used for speaking. Brief focused interviews were conducted at 12-month follow-up, in which participants completed service evaluation questionnaires and were asked to relate what had been significant in their voice and communication journey in terms of skills and progress. All data were reviewed by the two senior treating speech and language therapists, verified by a third senior speech and language therapist in the service, and analyzed by an assistant psychologist and researcher. Variables were identified as attendance rates, motivation with exploration and home practice, timing of testosterone therapy, and participant experience of vocal change process. Statistical and thematic analysis Four-level repeated-measures analysis of variance (ANOVA) and paired sample t-tests were conducted to identify any significant changes in the measures (participant evaluations and pitch measures) across the different time points. Participant interview narratives at the 12-month follow-up were reviewed. Raw data were coded for theme development and interpretation based on theme frequency, juxtaposition, interrelationship of participant meaning-making, and experience of voice group and voice and communication development process. 
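The paired-sample t statistic used throughout the analysis can be sketched in plain Python. The F0 values below are hypothetical, not the study's data, and this is an illustrative reimplementation rather than the SPSS procedure the service used; it also shows the standard paired-design relation Cohen's d = t/√n, which is consistent with the effect sizes reported in the Results (e.g., t(9) = 4.47 and d = 1.41).

```python
import math

def paired_t_and_d(before, after):
    """Paired-sample t statistic, degrees of freedom, and Cohen's d.
    For paired designs d = mean difference / SD of differences = t / sqrt(n)."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    # Sample variance of the pairwise differences (n - 1 denominator).
    var_d = sum((x - mean_d) ** 2 for x in diffs) / (n - 1)
    sd_d = math.sqrt(var_d)
    t = mean_d / (sd_d / math.sqrt(n))
    return t, n - 1, mean_d / sd_d

# Hypothetical F0 values (Hz) at workshop 1 and workshop 2:
t, df, d = paired_t_and_d([180, 175, 190, 185, 178], [170, 168, 182, 176, 171])
```

A positive t here indicates a drop in F0 from the first to the second measurement; the p-value would then be read from the t distribution with df degrees of freedom (SPSS, or `scipy.stats.ttest_rel`, does this directly).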
Service evaluation When asked how satisfied they were with the voice group, four participants said they were satisfied and six said they were very satisfied. When asked how helpful they found the group, two participants said they found it helpful and eight said they found it very helpful. When asked if they would recommend the service to others, two participants said they agreed and eight said that they strongly agreed. These results are shown in Figure 1. Figure 2 shows the changes in mean overall, ideal, and comfort ratings over the four time points, as well as the difference between the overall and ideal ratings. Mean scores and standard deviations for responses to the main questionnaire are shown in Table 2. To measure the changes in overall, ideal, and comfort ratings across the four time slots, a four-level repeated-measures ANOVA was conducted for each of the three measures. When an analysis indicated a statistically significant change across time slots, a series of paired sample t-tests were conducted between each time point to identify which differences were significant (Tables 3 and 4). Participant self-evaluations The difference between overall and ideal ratings. To assess how the difference between the overall and ideal ratings changed over time, a repeated-measures ANOVA was conducted. The results of the analysis showed a significant decrease in difference between the two across the four time slots [F(3, 27) = 26.7, p < 0.01, η² = 0.51]. Six paired sample t-tests demonstrated significant decreases over time between each pairing of time slots, except between the second workshop and the 6-month follow-up (Table 5). 
Voice and communication skills At the beginning of the first and at the end of the second workshops, participants were asked to evaluate their skills in voice and communication, specifically: the extent to which the adaptability of their voice was restricted, how quickly their voice would fatigue, their ability to project their voice, and their confidence in public speaking. Figure 3 shows the differences in mean ratings for these measures. Means and standard deviations are shown in Table 6. To see if there was a significant change in self-reported levels of vocal skills between the two workshops, four paired sample t-tests were conducted comparing each of the four items at the first workshop and the second workshop. The analyses showed a significant decrease in ratings of voice fatigue (p = 0.039) and of restriction in voice adaptability (p < 0.01), a significant increase in confidence in public speaking (p < 0.01), and no significant change in voice projection (p = 0.07). Pitch measures At the beginning of the first and at the end of the second workshops, participants' modal RFF and SFF pitches were measured. Means and standard deviations for speaking and reading pitch are shown in Table 7. To see if there was a significant change in pitch between the two workshops, two paired samples t-tests were conducted comparing the reading and speaking levels of pitch between the first and second workshops. These tests showed significant decreases for both speaking pitch [t(9) = 4.47, p < 0.01, d = 1.41] and reading pitch [t(9) = 4.37, p < 0.01, d = 1.38] between the two workshops. Qualitative thematic analysis Participant clinical interviews at the 12-month follow-up were coded, and a thematic analysis undertaken in terms of frequency of key words, phrases and common narratives of perceptions, feelings and experiences of voice and communication therapy, vocal function and development, and being in the group. Themes. 
Group learning: ''voice group and review really helped me to learn about how to use my voice better and more effectively at work and on the phone''; ''the group was super important as a safe space to explore not just my voice but communication.'' Embodying voice: ''learning voice projection and being assertive, and linking up my voice to my body has helped me with public speaking''; ''my voice is hooked up to my body more now.'' Managing challenge and developing confidence: ''I can be more assertive in meetings now''; ''I have been able to raise the bar higher as I have developed my voice more.'' Voice exploration beyond pitch change: ''voice therapy was somewhere for my voice to grow in before I started t; that was really surprising and helpful''; ''I learned about the difference between loudness and expression in my voice and that was key to my confidence, even though I had already started t.'' Discussion The diversity reported by Azul 1 in the transmasculine population applies to the subpopulation in this sample of individuals who identify as trans men, evidenced by a range of highly personal self-constructs of gendered voice beyond the parameter of pitch alone. Voice and communication group therapy can offer a space not only to explore safe voice change but also those presentational, attributional, normative, and diversity factors that contribute to individual style and behavior in social interaction. Notably, all participants were measured to use a personally meaningful and attainable lowered speaking and reading pitch after workshop 2, with no dysphonia (voice disorder). For those five not taking testosterone, pitch lowered by 0.5-1.5 semitones. This is an important implication for clinical practice as a marker for the limits of lowering pitch behaviorally without vocal hyperfunction. 
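Semitone shifts such as the 0.5-1.5 reported above relate to fundamental frequency by the standard logarithmic formula, 12·log2(F0_after/F0_before). A minimal helper (the frequencies in the example are illustrative, not participant data):

```python
import math

def semitone_change(f0_before_hz, f0_after_hz):
    """Pitch change in semitones between two fundamental frequencies.
    Negative values indicate a lowered pitch."""
    return 12 * math.log2(f0_after_hz / f0_before_hz)
```

For example, halving F0 is exactly -12 semitones, so a behavioral drop from roughly 190 Hz to 180 Hz is on the order of -1 semitone, the scale of change reported for the participants not taking testosterone.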
As participants' ratings of their overall sense of voice masculinity and comfort increased significantly from the first workshop to the 12-month follow-up, it seems that the benefits of comprehensive group programs can be both sustainable and transferrable into everyday life. In addition, qualitative data from client narratives at the 12-month follow-up, together with the decrease in the difference between participants' overall and ideal ratings for their voice, suggest that positive achievements were linked to increasing confidence, for example: ''I have been able to raise the bar higher as I developed my voice more.'' Self-evaluations confirmed that group therapy can address broader aspects of vocal function such as vocal stamina and flexibility, and more confident presentation of self, such as in public speaking. Participant ratings of vocal projection did not differ significantly from the beginning of workshop 1 to the end of workshop 2, possibly suggesting that this is an advanced vocal skill requiring more opportunities for development. However, at the 12-month follow-up, narrative themes indicated that there was further development in this parameter (embodying voice theme). All participants reported the service as helpful in facilitating vocal change and voice skill development, and that they would recommend the program to other service users. While this evaluation cannot be generalized to other populations, it indicates that the following interventions addressing vocal function and situation were significant catalysts for change for this specific group, and the details add to what has already been described. 
16,17,21,22 The interventions involved coaching and motor learning of specific voice skills, raising mindful awareness of the felt sense of body and voice in exercises generalizing to discursive contexts, and the choices available to individuals regarding relational, social presence:
Voice education (vocal anatomy and physiology) 16,17,21,22,32
Voice care, in particular during vocal fold changes on testosterone, and pitch monitoring 17,21,22
Vocal embodiment: effect of binding on resonance, rib and back stretches, jaw release, centered breathing, and grounding 17,21-23,31
Optimizing breath support, especially regarding vocal mass changes on testosterone 17,22,31
Chest and pharyngeal resonance development: chest tapping, low humming, tongue root release, and jaw release 17,22,23,30,31
Presence and personal impact 17,21,24
Mindfulness and compassion 26,29
Role-play and improvisation of everyday speaking situations, for example, telephone speaking and interviewing 17,21,22
Voice projection: ''twang'' voice quality, ''arcing'' voice, and muscular articulation development 17,21-23,31
Assertiveness training 17,21,22
Discussion of norms, unconscious bias, and authenticity 17,21,22
A focus on solutions and giving/receiving constructive peer feedback 17,21,22,25
Limitations SFF and RFF pitch measures, and self-evaluations of voice skill measures, were not taken beyond workshop 2. Repetition of these measures would have described the potential relationship between voice skills and the comfort, overall, and ideal ratings, and specific carryover beyond the workshops. Instead, these are suggested thematically in qualitative narrative terms alone. Service evaluation questionnaires were returned anonymously, so only descriptive statistics could be reported for the sample. The service evaluation findings cannot be generalized to other populations, and the local population sample of 10 is small. 
The participants, while all identifying as trans men, expressed highly individual voice and communication goals. Therefore, these satisfaction findings should be replicated and the protocol assessed among larger cohorts, not only of trans men but also of transmasculine and nonbinary individuals seeking masculinizing voice therapy, and in prospective research into the voice and communication therapy interventions. Conclusions Ten trans men receiving voice and communication group therapy and follow-up to 12 months reported high levels of satisfaction with the service, that it was helpful in facilitating voice change and vocal skill development, and that they would recommend it to others. They reported significant shifts in voice skills and self-evaluations of voice. The evaluation demonstrated that voice and communication interventions used in the service are significant in facilitating vocal situational change, and supports their inclusion in the development of a transmasculine voice modification protocol.
Erythrocytic α-Synuclein as a potential biomarker for Parkinson’s disease Background Erythrocytes are a major source of peripheral α-synuclein (α-Syn). The goal of the current investigation is to evaluate erythrocytic total, oligomeric/aggregated, and phosphorylated α-Syn species as biomarkers of Parkinson’s disease (PD). PD and healthy control blood samples were collected along with extensive clinical history to determine whether total, phosphorylated, or aggregated α-Syn derived from erythrocytes (the major source of blood α-Syn) are more promising and consistent biomarkers for PD than are free α-Syn species in serum or plasma. Methods Using newly developed electrochemiluminescence assays, concentrations of erythrocytic total, aggregated and phosphorylated at Ser129 (pS129) α-Syn, separated into membrane and cytosolic components, were measured in 225 PD patients and 133 healthy controls and analyzed with extensive clinical measures. Results The total and aggregated α-Syn levels were significantly higher in the membrane fraction of PD patients compared to healthy controls, but without alterations in the cytosolic component. The pS129 level was remarkably higher in PD subjects than in controls in the cytosolic fraction, and to a lesser extent, higher in the membrane fraction. Combining age, erythrocytic membrane aggregated α-Syn, and cytosolic pS129 levels, a model generated by using logistic regression analysis was able to discriminate patients with PD from neurologically normal controls, with a sensitivity and a specificity of 72 and 68%, respectively. Conclusions These results suggest that total, aggregated and phosphorylated α-Syn levels are altered in PD erythrocytes and peripheral erythrocytic α-Syn is a potential PD biomarker that needs further validation. Electronic supplementary material The online version of this article (10.1186/s40035-019-0155-y) contains supplementary material, which is available to authorized users. 
Background Parkinson's disease (PD) is a common age-related movement disorder. Currently, clinical diagnosis of PD mainly relies on motor symptoms such as resting tremor, bradykinesia, muscle rigidity and balance disorders [1]. Previous studies have examined α-synuclein (α-Syn), a key protein critically involved in PD pathogenesis, as a potential biomarker. Most studies focus on cerebrospinal fluid (CSF) [2-4], which is in direct contact with the brain and spinal cord. However, obtaining CSF routinely at typical clinics is challenging, due to the invasive nature of the procedure and need for highly skilled staff to perform it. Additionally, the performance of CSF α-Syn in PD diagnosis has been shown to be only low to moderate [5,6]. Because collection of peripheral blood samples is considerably easier, defining biomarkers in blood has numerous advantages. However, assessment of plasma/serum α-Syn levels has not yielded consistent results [7-12] partially because > 99% of blood α-Syn is located in erythrocytes [13], and hemolysis, in vivo or in vitro, markedly affects α-Syn values [14]. Further, the interaction of α-Syn with lipid membranes is implicated in its physiological and pathological roles [15-17], and PD patients exhibit morphological abnormalities of erythrocytes [18], possibly via the known effects of aggregated α-Syn on cell membranes [19,20]. These factors suggest that expression and function of α-Syn forms may differ between erythrocyte compartments, with each representing separate aspects of α-Syn pathology. Moreover, plasma oligomeric [21] and phosphorylated [22] α-Syn, two species associated with its mechanisms of toxicity [23-28], and α-Syn oligomers in erythrocytes [29,30], have also been measured (though not in separate cellular components) with encouraging results. 
However, these preliminary studies by us and others tested small sample cohorts with less robust immunoassays, and thus further independent validation studies are needed. The present study is designed to test the hypotheses that total, aggregated (including oligomers and larger, soluble aggregates) and/or phosphorylated α-Syn in membrane and cytosolic fractions derived from peripheral erythrocytes are altered in PD and could serve as biomarker candidates for either disease diagnosis or disease severity correlation. In this study, we have used a relatively large clinical cohort and developed more robust immunoassays to validate the potential of erythrocytic α-Syn as a PD biomarker. Our study examined α-Syn separately in the membrane and cytosolic fractions of the erythrocyte. The expression and function of α-Syn forms may differ between erythrocyte compartments, with each representing separate aspects of α-Syn pathology. Participants Standard protocol approvals, registrations, and patient consents: The study protocol was approved by the Institutional Review Boards of Peking University, Peking University Third Hospital, Beijing, China. Written consent was obtained from all subjects. The study cohort included 225 patients diagnosed with idiopathic PD, and 133 healthy control subjects recruited from Capital Medical University, Tiantan Hospital, Beijing, China. All PD patients met diagnostic criteria in accordance with those of the United Kingdom PD Society Brain Bank [31], and were treated with medication. All control participants were recruited from the physical examination center in Tiantan Hospital, and were healthy subjects without history of any neurological disorders. Subjects underwent evaluation including medical history, and assessment of motor and cognitive functions. Any control subjects or PD patients with hematological diseases or inflammatory diseases were excluded from this study. 
The characteristics of the cohort are presented in Table 1, and are quite similar to those in our previous studies [32,33], including distribution of Unified Parkinson disease rating scale (UPDRS; part III) [34], a typical measure of movement dysfunction, and Montreal Cognitive Assessment (MoCA), a screening instrument for the detection of cognitive impairment or dementia in PD [35]. No subjects in the present study were included in our pilot study [29], which used a traditional ELISA and a smaller cohort to measure α-Syn oligomers in erythrocytes. Erythrocyte collection and separation Whole blood (5 ml) was collected in EDTA-coated tubes and aliquoted. The blood was centrifuged at 1500×g and 4°C for 10 min, and plasma and leukocytes were removed. Pelleted erythrocytes were washed three times in PBS and centrifuged at 1500×g for 10 min. The supernatant was removed and the pellets were aliquoted and stored at − 80°C within 90 min of blood collection. Samples were thawed only at the time of analysis. To separate the cytosolic and membrane fractions, erythrocytes were subjected to two sequential freeze (− 80°C) and thaw (room temperature) cycles, then centrifuged at 14000×g and 4°C for 10 min. The cytosolic protein-containing supernatant (cytosolic fraction) was removed and stored at − 80°C, while the membrane pellet was subsequently washed three times with PBS and centrifuged at 14000×g and 4°C for 10 min. The membrane pellet was solubilized with STET lysis buffer (0.1 mmol/L NaCl, 10 mmol/L Tris pH 8.0, 1 mmol/L EDTA, 1% Triton X-100), incubated on ice for 30 min, and centrifuged at 14000×g and 4°C for 10 min to pellet any remaining insoluble material. The membrane protein-containing supernatant (membrane fraction) was isolated and stored at − 80°C. The quality of the separation was assessed by probing specific membrane (glycophorin A [CD235a]) and cytoplasmic (glyceraldehyde-3-phosphate dehydrogenase [GAPDH]) proteins by western blot. 
Protein concentrations in erythrocyte membrane and cytosol fractions were measured using the bicinchoninic acid (BCA) protein assay kit (Pierce/Thermo Fisher Scientific, Rockford, IL, USA) at an absorbance of 562 nm relative to a protein standard. Phosphorylated standards were semisynthetic full-length proteins generated by ligation of a recombinant peptide to a synthetic phosphopeptide. Filaments were generated by the manufacturer from purified monomers, and the concentration was assessed by BCA protein assay. Filaments were reconstituted in distilled, deionized water at a concentration of 1 mg/ml and frozen at − 80°C before use. Immediately before the assay was run, the calibrators were diluted in Diluent 35 (MSD, Rockville, MD, USA) to 1 μg/ml and sonicated for 1 min before preparation of the standard curve by serial dilution. Anti-α-Syn clone 42 (624096, BD Biosciences, San Jose, CA, USA) was labelled with Sulfo-TAGs according to MSD's instructions and used as the detector for all three assays. Anti-α-Syn MJFR-1 clone 12.1 (ab138501, Abcam, Cambridge, MA, USA), conformation-specific anti-α-Syn filaments MJFR-14 (ab209538, Abcam), and anti-phosphorylated α-Syn at Ser129 (pS129; BioLegend, San Diego, CA, USA) antibodies were biotinylated and coated onto standard 96-well Meso Scale Discovery (MSD) U-Plex plates by incubating the plates with 1 μg/ml capture antibody solutions for 2 h at room temperature with 600 rpm shaking, according to the manufacturer's instructions. After washing three times with 150 μl wash buffer (MSD), plates were blocked with 150 μl Diluent 35 (MSD) for 1 h while shaking at 600 rpm at room temperature, then washed three times in wash buffer. 
Samples were diluted (cytosol samples were diluted 1:10^5 and membrane samples were diluted 1:10^4 for the total α-Syn assay; both cytosol and membrane samples were diluted 1:100 for the aggregated α-Syn assay; cytosol samples were diluted 1:25 and membrane samples were diluted 1:15 for the pS129 assay; all in Diluent 35) and incubated with recombinant α-Syn standards for 1 h at room temperature while shaking at 600 rpm. After washing three times, Sulfo-TAG-labelled anti-α-Syn clone 42 antibody (1 μg/ml) was added and incubated for 1 h at room temperature with 600 rpm shaking. After washing three times, 150 μl of 2× Read Buffer T (MSD) was applied to each well and plates were analyzed in a Sector Imager 6000 (MSD). Data analysis was performed with the MSD Discovery Workbench 3.0 Data Analysis Toolbox. Spike-in recovery was performed to test the assay accuracy by spiking various concentrations of the corresponding standard proteins for each assay into the sample matrix and calculating the recovery as follows: (observed spiked sample concentration - observed unspiked sample concentration) / expected added concentration × 100%. For the total α-Syn assay, 100, 200, and 400 pg/ml of unphosphorylated α-Syn monomers were spiked into the cytosolic or membrane erythrocyte matrix. For the aggregated α-Syn assay, 250, 500, and 1000 pg/ml of α-Syn filaments were used for the cytosolic erythrocyte matrix, and 50, 100, and 200 pg/ml of α-Syn filaments were used for the membrane erythrocyte matrix. For the pS129 assay, 250, 500, and 1000 pg/ml of α-Syn Phospho S129 were used for the cytosolic erythrocyte matrix, and 50, 100, and 200 pg/ml of α-Syn Phospho S129 were used for the membrane erythrocyte matrix. Statistical analysis Total α-Syn was normalized to total erythrocytic protein levels in the same subcellular compartment, and concentrations of aggregated and pS129 α-Syn were normalized to total α-Syn levels before analysis. Both raw and normalized values are reported. 
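The spike-in recovery formula quoted above translates directly into code; the concentrations in the example are hypothetical, chosen only to illustrate the arithmetic:

```python
def spike_recovery_pct(observed_spiked, observed_unspiked, expected_added):
    """Spike-in recovery (%) as defined in the text:
    (observed spiked - observed unspiked) / expected added * 100."""
    return (observed_spiked - observed_unspiked) / expected_added * 100

# Hypothetical example: the matrix reads 50 pg/ml unspiked; after adding
# 200 pg/ml of standard, the assay reads 258 pg/ml.
recovery = spike_recovery_pct(258, 50, 200)  # 104.0% recovery
```

Recoveries close to 100%, like those reported for the three assays here, indicate that the sample matrix neither suppresses nor inflates the measured concentration of the added standard.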
Because the biomarker data produced a skewed distribution that was not remedied by transformation of the variables, the non-parametric Mann-Whitney U test was used to compare group means and Spearman's rank correlation coefficient (ρ) was used to analyze correlation between biomarkers and PD severity or between different α-Syn species within each cellular compartment. P < 0.05 was considered significant. To generate a multivariable logistic regression model suitable to analyze independent influencing factors for PD diagnosis, binary logistic regression was performed using the Backward LR (likelihood ratio) method. The area under the receiver operating characteristic (ROC) curve was analyzed to determine the most appropriate cutoff values for PD and control groups. Analyses were performed using SPSS 23.0 software (SPSS Inc., Chicago, IL, USA) and GraphPad Prism 6 (GraphPad Software, La Jolla, CA, USA). Establishment of the ECL assays To reliably quantify total, aggregated and pS129 levels in human erythrocytes, specific and sensitive novel ECL methods were developed on the MSD platform. The total α-Syn ECL assay has a broad detection range from 5 pg/ml to 10 ng/ml (Fig. 1a). The day-to-day and plate-to-plate signal variability were low (CVs < 10% within the 16 plates analyzed). Assay accuracy was measured by linearity-of-dilution, and by spiking human recombinant α-Syn monomer into erythrocytic cytosolic and membrane protein fractions. The recoveries of the linearity-of-dilution tests were 104.9 ± 0.3% and 102.6 ± 0.6% (Fig. 1b), and spike-in recoveries were 101.7 ± 2.2% and 104 ± 3.2%, for monomer cytosol and membrane fractions, respectively (Fig. 1c). The aggregated α-Syn assay utilizes an antibody that recognizes the conformational changes undergone by α-Syn upon aggregation. Both oligomers and larger soluble aggregates, including those derived from sonicated fibrils, together encompassed by the general term "aggregates," are recognized by the assay. 
The aggregated α-Syn assay has a detection range from 9 pg/ml to 10 ng/ml (Fig. 2a), and low day-to-day and plate-to-plate signal variability (CVs < 10%). The recoveries of linearity-of-dilution for cytosolic and membrane fractions were 99.3 ± 0.4% and 101.8 ± 1.7%, respectively (Fig. 2b), and the spike-in recoveries for erythrocytic cytosolic and membrane samples were 94.5 ± 2.2% and 114 ± 2.3%, respectively. The specificity of the developed assay for aggregated α-Syn was tested by comparing the signal detected for monomeric α-Syn to that of the soluble aggregate calibrator; very little signal was detected (e.g., the monomer signal was < 2% of the soluble-aggregate signal at 10 ng/ml), suggesting low affinity of the assay for monomeric α-Syn species (Fig. 1d). When the aggregated calibrator was denatured by pre-treatment with 8 M urea for 3 h, followed by dilution, the oligomer-specific signal was eliminated. Addition of the same final concentration of urea (1 mM) in the assay had no effect on the oligomer signal (Fig. 2d). To further demonstrate the specificity of the assay, we performed an experiment using the conformation-specific α-Syn antibody to capture oligomeric Aβ, followed by detection using either the same anti-α-Syn antibody used in the assay (to demonstrate the specificity of the overall assay) or an alternative detection antibody against Aβ (to demonstrate the specificity of the conformation-specific capture antibody in particular). The low resulting signal (1.5 and 2% of the signal obtained with an equivalent concentration of aggregated α-Syn, respectively) further indicates the specificity for aggregated α-Syn (Fig. 2d). Additionally, we found that immunodepletion of either total (Fig. 2e) or aggregated (Fig. 2f) α-Syn, followed by measurement of the sample using the aggregated or total α-Syn assay, resulted in greatly reduced signal of aggregated α-Syn.
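The linearity-of-dilution recoveries reported above back-calculate the neat concentration from a diluted reading; a hedged sketch with hypothetical numbers:

```python
def dilution_linearity_recovery(reference_neat, diluted_reading, dilution_factor):
    """Back-calculate the undiluted concentration (reading x dilution factor)
    and express it as percent recovery against the reference neat estimate."""
    back_calculated = diluted_reading * dilution_factor
    return back_calculated / reference_neat * 100.0

# hypothetical: a sample estimated at 100 ng/ml neat; its 1:100 dilution reads 1.02 ng/ml
recovery = dilution_linearity_recovery(100.0, 1.02, 100)  # 102.0 %
```

Recoveries near 100% across several dilution steps indicate the assay reads proportionally over its working range.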
Characterization of erythrocyte membrane and cytoplasmic component properties

To determine the quality of the separation of erythrocyte membrane and cytoplasmic components, western blots were performed using antibodies against CD235a (a sialoglycoprotein present on the erythrocyte membrane) and GAPDH (expressed in the erythrocytic cytoplasm). CD235a was detectable only in the membrane fraction and GAPDH was present only in the cytoplasmic fraction, indicating good separation (Additional file 1: Figure S1).

The relationship between erythrocytic α-Syn and PD diagnosis and severity

We compared different α-Syn species by fraction in PD and control (raw and normalized values are reported in Table 2). Of note, erythrocyte content of blood samples varied substantially by subject. The concentrations of total α-Syn were normalized to total erythrocytic protein. In order to better separate changes in specific forms from changes in total α-Syn, the concentrations of aggregated and pS129 α-Syn were normalized to total α-Syn. In this cohort, total α-Syn was significantly higher in the membrane fraction of PD subjects compared to controls (p = 0.008; Fig. 3b, Table 2), but trended lower in the cytosolic fraction (p = 0.203; Fig. 3a, Table 2). There was no significant difference in erythrocytic aggregated α-Syn in the cytosolic fraction (p = 0.469; Fig. 3c, Table 2), but in the membrane fraction it was significantly higher in PD than in control (p < 0.0005; Fig. 3d, Table 2). pS129 was higher in PD than in control, particularly in the cytosolic fraction (p < 0.0005; Fig. 3e, Table 2).

Fig. 1 Establishment and characterization of the total α-Syn and pS129 ECL assay systems. a A representative standard curve of the total α-Syn assay. The detection range was from 5 pg/ml to 10 ng/ml (R² = 0.999). b Accuracy of the total α-Syn assay was tested by linearity-of-dilution, with dilutions of 1:10³, 1:10⁴, 1:10⁵ and 1:10⁶ in Diluent 35 in the cytosol and 1:10³, 1:10⁴, and 1:10⁵ in the membrane fraction. c Total α-Syn assay accuracy was also tested by spike-in recovery using 100, 200, and 400 pg/ml of unphosphorylated α-Syn monomers. d A representative standard curve of the pS129 assay (black line, R² = 0.999). The detection range was from 10 pg/ml to 5 ng/ml. Assay specificity was measured by detecting unphosphorylated α-Syn monomers (red line), or unphosphorylated α-Syn aggregates (blue line) at the same concentrations. e Linearity-of-dilution of the pS129 assay was assessed using dilutions of 1:25, 1:50 and 1:100 in the cytosol and 1:7.5, 1:15 and 1:30 in the membrane fraction. f pS129 assay spike-in recovery was tested by spiking in 250, 500, and 1000 pg/ml of the pS129 standard in the cytosolic fraction and 50, 100, and 200 pg/ml of the standard in the membrane fraction.

Fig. 2 Establishment and characterization of the aggregated α-Syn ECL assay system. a The aggregated α-Syn standard curve was generated over a range of 9 pg/ml to 10 ng/ml (black line; R² = 0.999). Specificity was tested by measuring unphosphorylated (red line) or phosphorylated (blue line) monomeric species run in the same aggregated α-Syn assay. b Linearity-of-dilution of the aggregated α-Syn assay was assessed by using dilutions of 1:1000, 1:100 and 1:10 in the cytosolic and membrane fractions. c Spike-in recovery of the aggregated α-Syn assay was tested by spiking 250, 500, and 1000 pg/ml of α-Syn aggregates into the cytosolic fraction or 50, 100, and 200 pg/ml into the membrane fraction. d Specificity of the MJFR14 conformation-specific antibody was examined. Red line: aggregated α-Syn signals after dissociation using 8 M urea treatment. Green line: aggregated α-Syn standard curves incubated with 1 mM urea, the same final concentration as included in the disaggregated calibrator assay.
Yellow line: Aβ oligomers detected using the aggregated α-Syn assay (MJFR14 antibody and anti-α-Syn detection antibody). Blue line: Aβ oligomers captured with the conformation-specific α-Syn antibody and an Aβ-specific detection antibody. e The total α-Syn concentrations (measured using the total α-Syn assay) before and after immunoprecipitation using MJFR1 (recognizing "total" α-Syn, including monomeric and oligomeric/aggregated forms) or MJFR14 (recognizing aggregated α-Syn only) in erythrocyte samples. f The aggregated α-Syn concentrations (measured using the aggregated α-Syn assay) before and after immunoprecipitation using MJFR1 or MJFR14 in erythrocyte samples.

[Notes to Table 2: a Total α-Syn concentrations were normalized to the total protein levels in the same erythrocytic subcellular compartment, and are expressed in units of pg (total α-Syn)/μg (total protein). b Oligomeric or phospho (pS129) α-Syn concentrations were normalized to the corresponding total α-Syn levels, and are expressed in units of pg (oligo α-Syn)/μg (total α-Syn) and pg (pS129)/μg (total α-Syn), respectively. PD Parkinson's disease, SEM standard error of the mean, α-Syn α-synuclein.]

Fig. 3 The erythrocytic levels of α-Syn species in patients with Parkinson's disease (PD) and healthy controls. a Cytosolic total α-Syn, normalized to cytosolic total proteins (pg/μg); b Membrane total α-Syn, normalized to membrane total proteins (pg/μg); *, p = 0.008 (Mann-Whitney U test); c Cytosolic aggregated α-Syn, normalized to cytosolic total α-Syn (pg/μg); d Membrane aggregated α-Syn, normalized to membrane total α-Syn (pg/μg); ****, p < 0.0005; e Cytosolic pS129, normalized to cytosolic total α-Syn (pg/μg); ****, p < 0.0005; f Membrane pS129, normalized to membrane total α-Syn (pg/μg); ****, p < 0.0005.

To evaluate the diagnostic utility of erythrocyte α-Syn, a ROC analysis was performed based on each analyte independently. The areas under the curve (AUCs) of the individual analytes with the best separation between groups were 0.67 and 0.71 for erythrocytic membrane aggregated α-Syn and cytosolic pS129, respectively (Fig. 4). We also assessed the performance of the oligomeric/aggregated α-Syn alone in whole erythrocytes (normalized to erythrocyte total proteins), as in our previous pilot study [29], but the results (AUC = 0.76) were not confirmed in this larger, independent cohort with more robust ECL assays (AUC = 0.61; Additional file 1: Figure S2). Next, a step-wise logistic analysis was performed to select the best predictors, including the erythrocyte α-Syn forms that differed most between PD and healthy controls, along with age and gender. Membrane aggregated α-Syn, cytosolic pS129, and age were algorithmically selected as the major influencing factors for PD diagnosis (Table 3), and included in the integrated model. Based on the ROC analysis, the model was able to discriminate PD from control with an AUC of 0.79 (95% CI 0.74-0.84). Sensitivity was 72% and specificity was 68% in this cohort with a cutoff of 0.58 (Fig. 4). We also collected clinical data including UPDRS III (motor) score, disease duration and MoCA score to reflect the severity of different aspects of PD. We found no significant correlations between erythrocyte α-Syn species and any of the three clinical measures except that erythrocytic membrane pS129 was negatively correlated with MoCA (Spearman correlation coefficient ρ = −0.17, p = 0.011), though the clinical relevance of this observation needs to be further investigated.
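The ROC statistics discussed above (AUC, and sensitivity/specificity at a cutoff) can be illustrated with a minimal, dependency-free sketch; the scores below are invented, not study data. The empirical AUC equals the Mann-Whitney U statistic divided by the number of case-control pairs:

```python
def roc_auc(cases, controls):
    """Empirical AUC: the probability that a randomly chosen case scores
    higher than a randomly chosen control (ties count 1/2)."""
    wins = sum(1.0 if c > k else 0.5 if c == k else 0.0
               for c in cases for k in controls)
    return wins / (len(cases) * len(controls))

def sens_spec(cases, controls, cutoff):
    """Sensitivity and specificity when scores >= cutoff are called 'PD'."""
    sens = sum(s >= cutoff for s in cases) / len(cases)
    spec = sum(s < cutoff for s in controls) / len(controls)
    return sens, spec

# hypothetical model probabilities for PD cases and controls
pd_scores = [0.90, 0.75, 0.62, 0.55, 0.40]
ctrl_scores = [0.70, 0.52, 0.45, 0.30, 0.20]
auc = roc_auc(pd_scores, ctrl_scores)                        # 0.80
sens, spec = sens_spec(pd_scores, ctrl_scores, cutoff=0.58)  # 0.6, 0.8
```

Sweeping the cutoff trades sensitivity against specificity, which is how a "most appropriate" operating point such as 0.58 is chosen from the curve.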
Discussion

Development of highly sensitive and accurate ECL assays to quantify erythrocyte α-Syn in peripheral blood

We established highly sensitive, accurate ECL assays to measure erythrocytic total, aggregated and pS129 α-Syn, with low intra- and inter-run variation, achieving sensitivity comparable to that of assays using the Luminex platform [32,33] and much higher than traditional [37] or even recently improved [38] ELISAs for total and pS129 α-Syn. We applied these assays to a large cohort with comprehensive clinical information collected, in order to assess their function as PD biomarkers in this study. Notably, the aggregated α-Syn assay uses a recently developed antibody that recognizes the conformation taken by α-Syn in oligomers and aggregates [39]. Thus, the specificity of the capture antibody is vital for interpreting the results of the assay. We performed several experiments to demonstrate the specificity of the antibody and assay (recognition of phosphorylated or unphosphorylated monomeric α-Syn, disrupting aggregated standards, immunodepletion, and detection of oligomeric Aβ). Together, these studies support the previous finding that the antibody is sensitive and specific for oligomeric/aggregated species. However, it should be considered that the antibody cannot distinguish between oligomers and filaments; thus, the exact species measured in this study are not known, and whether the identity of the aggregated species present, in addition to the quantity, differs between groups requires further study. Furthermore, while the calibrator used was generated from fibrils, the unknown endogenous species should be further examined using differing technologies in future studies. Almost all (99%) of the α-Syn present in blood is contained in erythrocytes [13]: plasma contains only 0.1% of blood α-Syn, peripheral blood mononuclear cells (PBMCs) 0.05%, and platelets 0.2% [13].
Results obtained for plasma or serum α-Syn are quite inconsistent, likely due to factors including differences in assays, sample handling, and, importantly, the extent of hemolysis [7-12, 22, 32]. This factor presents a major challenge, as differing hemolysis will result in high, variable levels of erythrocyte α-Syn contaminating the sample, potentially overwhelming the plasma or serum signal. Our study focused on whether erythrocytes, the major source of blood α-Syn, are altered in PD versus controls. Because the erythrocytes were isolated and washed during the preparation procedure, contamination from the membrane or contents of cells disrupted during hemolysis is unlikely to play a major role in our assays, thus minimizing this problem that dramatically affects studies of serum or plasma.

α-Syn in different components of erythrocytes is altered in PD

The current investigation explored the possible involvement of different erythrocytic α-Syn forms in PD patients. Cytosolic pS129 α-Syn levels were significantly higher in PD, but there was no significant difference in cytosolic aggregated or total α-Syn between PD patients and controls, and pS129 and total α-Syn were not associated (Additional file 1: Figure S3B). In contrast, erythrocytic membrane total and aggregated α-Syn levels were associated (Additional file 1: Figure S3D) and were both significantly higher in PD patients. It should also be emphasized that the concentrations of aggregated and pS129 α-Syn were normalized to corresponding total α-Syn levels, indicating that their increases in PD are most likely not due to alterations of total (including monomeric or unphosphorylated) α-Syn levels in the same subcellular compartment. This observation certainly raises the possibility that increased plasma or serum α-Syn oligomers are derived from erythrocytes, and that erythrocytic pS129 could be a major source of plasma pS129.
The formation of Lewy bodies (LBs) is related to the misfolding, oligomerization and aggregation of α-Syn [40]. PD patients had higher levels of α-Syn oligomers in CSF [2,6,41] and plasma [21], although the source(s) of these oligomers is unknown. Recent studies have shown increased dimeric α-Syn in erythrocyte membranes [42], and an elevated oligomeric/total α-Syn ratio in erythrocytes [30], in PD vs control patients. Further, a few studies have observed differences in post-translationally modified α-Syn forms in erythrocytes between PD patients and healthy controls [43,44]. However, most of these studies, including our pilot study [29], tested small sample cohorts with less robust immunoassays. When assessing aggregated α-Syn in whole erythrocytes, unfortunately, our previously reported performance for PD diagnosis was largely not confirmed in the current study with a much larger, independent cohort and more robust ECL assays. Additionally, no correlation between erythrocyte α-Syn and disease duration, age, or motor scale score in PD patients has been reported [29], and most of these studies examined α-Syn in erythrocyte lysates, potentially missing differences in the sorting of α-Syn by fraction, or in sub-populations of the protein, thus highlighting the need for further studies. In the present study, we found that aggregated α-Syn in the membrane, but not the cytosolic, fraction was significantly higher in PD than in controls. This could be at least partially explained by altered lipid membrane composition in PD [45], which may affect α-Syn and membrane interactions as well as α-Syn aggregation [46]. On the other hand, remarkable morphological disorder of erythrocytes, exhibiting membrane spikes and eryptosis (programmed red cell death), has been reported in PD [18].
Given that α-Syn could progressively aggregate unevenly at the surface of the membrane, and that α-Syn oligomers/aggregates could disturb normal biological membranes, it is possible that increased membrane oligomeric α-Syn contributes to the morphological abnormalities of erythrocytes seen in PD patients [18]. Further support for this argument can be found in studies showing that α-Syn progressively aggregated unevenly at the surface of the membrane, and that both lipid and membrane proteins were incorporated in the aggregates [47]. A caveat is that the dopaminergic medication utilized by all PD patients included in this study may be a potential confounding factor, as dopamine is reported to induce α-Syn oligomerization [48]. Whether this factor affects the aggregated α-Syn measurements needs to be further investigated. α-Syn in Lewy bodies from PD patients is hyperphosphorylated at S129 [49-52], which may contribute to PD pathogenesis, as α-Syn becomes more susceptible to aggregation [23,24,50] and more toxic [23-25] when it is converted to pS129. α-Syn phosphorylation may also occur after LB formation [53], and pS129 accumulation in the brain could represent a late event in disease progression [51,54]. It is thus plausible that the aberrant phosphorylation of α-Syn may promote LB clearance or degradation [53]. Nonetheless, cross-sectional studies in brain tissue and CSF indicate increased phosphorylated α-Syn in PD [22,50,55,56], promoting further investigation of pS129 as a PD biomarker. In the present study, we found that pS129 was significantly increased in erythrocytes of PD patients, suggesting that aberrant phosphorylation of α-Syn may also occur in peripheral blood (erythrocytes), consistent with previous findings in plasma [21].
Whether this change in blood occurs during the early or late stages of the disease, and whether the peripheral α-Syn changes could contribute to PD development and progression in the brain, should be further investigated (see more discussion in section 4.4 below). Nonetheless, our current data suggest that peripheral erythrocytic pS129 could be a novel biomarker for PD diagnosis. Additionally, its correlation with MoCA in PD patients might suggest a relationship between erythrocytic pS129 and PD severity (cognition) that should be further investigated. Notably, the pS129 to total α-Syn ratio in erythrocytic fractions was about 1%. In contrast, the same ratio in CSF was reported to be 15-25% in previous studies [33,41,55,57]. Although these ratios might not be directly comparable, as different pS129 and total α-Syn immunoassays were used in the studies, it is possible that the phosphorylation of α-Syn is pathologically more relevant in CSF than in erythrocytes. That said, because biomarkers in peripheral blood are needed, changes in pS129 in erythrocytes or other blood fractions, likely when combined with other changes such as those observed in the current study, could still be clinically useful.

Erythrocytic α-Syn for PD diagnosis and severity

We discovered that erythrocytic membrane aggregated α-Syn and cytosolic pS129 are potential biomarkers for PD diagnosis, particularly when coupled with age, where they achieved a sensitivity and specificity comparable to those based on CSF α-Syn values [2,32], but using a sample source that is both more accessible and less sensitive to hemolysis. They could potentially be further improved by adding other factors, such as erythrocyte Aβ and tau [58]. Whether changes in erythrocytic α-Syn species can be reliably and consistently observed at early or even pre-clinical disease stages, i.e., whether they could be used as early or pre-clinical PD biomarkers, needs to be further studied.
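The pS129-to-total ratio comparison above (about 1% in erythrocytic fractions versus 15-25% reported for CSF) is a simple percentage of one concentration over another; an illustrative sketch with hypothetical concentrations, not measured values:

```python
def pct_of_total(species_conc, total_conc):
    """A species concentration as a percentage of total α-Syn (same units)."""
    return species_conc / total_conc * 100.0

# hypothetical concentrations in pg/ml
erythrocyte_ratio = pct_of_total(100.0, 10_000.0)  # 1.0 %
csf_ratio = pct_of_total(300.0, 1_500.0)           # 20.0 %, inside the reported 15-25 % range
```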
To date, it remains difficult to assess PD progression objectively. Correlation of CSF or peripheral α-Syn with PD severity and/or PD progression has been inconsistent [2, 7, 10-12, 29, 32], but an association of CSF α-Syn with non-motor symptoms or cognitive decline in PD has been suggested [59-61]. In this study, erythrocytic pS129 correlated modestly with the severity of PD cognitive symptoms, though whether this finding is significant will depend on independent validation, particularly in longitudinally collected samples.

α-Syn peripheral and central transport and the implications of altered peripheral erythrocytic α-Syn in PD

Earlier observations of aggregated α-Syn in neurons grafted into brains of PD patients suggested cell-to-cell transfer of α-Syn in a possible prion-like fashion [62]. Other studies have shown that non-fibrillar (monomeric or oligomeric) α-Syn can be secreted into the cell medium via cell-derived extracellular vesicles (EVs) such as exosomes [63]. Our recent study found that not only could α-Syn-containing EVs derived from cultured human erythrocytes pass through the blood-brain barrier, but erythrocyte EVs obtained from the blood of PD patients induced a greater pro-inflammatory response in microglia than did those from control subjects [64]. The hypothesis of peripheral-to-CNS transport of α-Syn is in line with observations demonstrating (1) that EVs or exosomes are an important route for transporting α-Syn species from the periphery to the CNS [63,64]; and (2) that erythrocytes play an important role in the peripheral aspects of PD onset [13,18,29,64]. Whether the increased erythrocytic and plasma/serum α-Syn observed in the current and previous studies, free or contained in exosomes, could be transported to the CNS, contributing to CNS pathology, remains to be investigated.
If such transfer occurs, this study is in line with the idea that peripheral changes in PD could influence the disease development and progression in the brain.

Conclusions

In summary, this study has demonstrated the usefulness of newly developed ECL assays in assessing total, aggregated and pS129 α-Syn contained in human erythrocytes. Although further independent validation is needed, this investigation suggested that erythrocytic α-Syn species are potential peripheral biomarkers for PD diagnosis, with sensitivity and specificity from an integrated model comparable to what can be achieved by CSF α-Syn. More interestingly, the observed alterations in erythrocytic α-Syn levels in PD, together with other existing evidence, support the idea that such peripheral changes could influence disease development and progression in the brain.

Additional file 1: Figure S1. Characterization of erythrocyte membrane and cytoplasmic component properties. Figure S2. The receiver operating characteristic curve for aggregated α-Syn in whole erythrocytes.
Use of the Hospital Survey of Patient Safety Culture in Norwegian Hospitals: A Systematic Review

This review aims to provide an overview of empirical studies using the HSOPSC in Norway and to develop recommendations for further research on patient safety culture. Oria, an online catalogue of scientific databases, was searched for patient safety culture in February 2021. In addition, three articles were identified via Google Scholar searches. Out of 113 retrieved articles, a total of 20 articles were included in our review. These were divided into three categories: seven perception studies, six intervention studies, and seven reliability and validation studies. The first study conducted in Norway indicated a need to improve patient safety culture. Only one intervention study was able to substantially improve patient safety culture. The validity of HSOPSC is supported in most studies. However, one study indicated poor quality in relation to the testing of criteria related to validity. This review is limited to Norwegian healthcare but has several relevant implications across the research field, namely that intervention studies should (1) validate dimensions more carefully, (2) avoid pitfalls related to both factor analysis methods and criteria validity testing, (3) consider integrating structural models into multilevel improvement programs, and (4) benefit from applying different, new versions of HSOPSC developed in Norway.

Introduction

Patient safety culture consists of the attitudes and routines among healthcare personnel and management that impact patient treatment [1,2]. A positive patient safety culture includes a focus on establishing systems, routines, resources, and infrastructure to reduce risks and errors [3]. Studies indicate an association between a positive patient safety culture and safe patient treatment [3-5].
In 2004, the Agency for Healthcare Research and Quality (AHRQ) launched the Hospital Survey on Patient Safety Culture (HSOPSC) version 1.0 to assess patient safety culture in hospitals [1,2]. HSOPSC includes 42 items grouped into 12 composite measures, or composites. Seven dimensions target the unit level, three dimensions target the hospital level, and two composites are outcome measures (overall perception of patient safety and frequency of events reported). HSOPSC also includes two questions that ask respondents to provide an overall grade on patient safety for their work area/unit and to indicate the number of events they reported over the past 12 months. Hospitals have the opportunity to benchmark results against other datasets [6], or potentially against previous baseline measures, to monitor development over time and to evaluate improvement initiatives. All of the measures are illustrated in Figure 1. The survey also includes limited background demographic information (work area/unit, staff position, etc.). As of September 2020, HSOPSC 1.0 has been administered in 95 countries and translated into 43 languages [7]. In Norway, the first two studies assessing patient safety culture using the HSOPSC were conducted in 2006 and 2008 at Stavanger University Hospital [8,9]. Hence, Norway applied HSOPSC relatively early after the instrument was developed. However, in other sectors and industries, such as the aviation and petroleum sectors and the nuclear industry, assessment of safety culture was already a tradition [10-12], so it was certainly not too early to assess safety culture in healthcare settings.

One literature review examined the psychometric properties of several questionnaires designed to measure the safety climate in healthcare [13]. The authors concluded that the HSOPSC covers the most central dimensions of safety culture, and that it meets psychometric criteria such as content- and criterion-related validity and internal reliability [13]. Moreover, it was presented as the most comprehensive validated instrument in healthcare, an evaluation which has been supported by several studies [13-16]. Therefore, HSOPSC is a potentially important tool for improving patient safety [2]. The aims of this study were (1) to review empirical studies using HSOPSC in Norway and (2) to develop recommendations for further research on patient safety culture based on our findings.

Materials and Methods

Data searches were conducted in Oria, an online catalogue of scientific databases which allows for broad searches across different databases and can be used to find printed and electronic resources at the University Library in Norway. Additional information concerning the sources included from the Oria search is listed in Table S1. An Oria search includes the Central Discovery Index from ExLibris. This broad search strategy evolved based on discussions with an experienced librarian. Oria searches also include MEDLINE and CINAHL (Table S2). Searches in Oria were performed using the terms "Hospital Survey on Patient Safety Culture" OR "HSOPSC" AND "Norway". The searches were conducted between 12 February 2021 and 18 February 2021. In addition, one article was identified by exploring Norwegian researchers (e.g., "Storm", "Haugen", "Reierstad", "Vifladt") conducting patient safety culture studies.
These Norwegian-based authors are listed in Appendix A. This search was conducted using Google Scholar. Moreover, two articles were identified by exploring all papers referring to the first validation study of HSOPSC [16] in Norway using Google Scholar. These were not found in the first search since they were published in books. Hence, several steps were conducted to ensure compliance with the inclusion and exclusion criteria described below. The study adheres to the PRISMA guidelines for systematic reviews [17]. The inclusion criteria were as follows: (1) the studies were conducted in Norway in a hospital setting, (2) the hospital version of HSOPSC was used (not the nursing home version), and (3) the heading and summary were written in English. The exclusion criterion was nonempirical studies (e.g., study protocols). The study selection PRISMA flowchart is presented in Figure 2.

Results

A total of 20 articles were included. These were divided into three categories: seven perception studies, six intervention studies, and seven reliability and validation studies.

Perception Studies

Seven studies were categorized as perception studies [8,18-23].
Some of these made comparisons with other samples [8,18] as well as repeated measures to monitor change over time [19]. In the first Norwegian study [8], the mean ± standard deviation (M ± SD) was reported. The strongest HSOPSC dimensions were "Teamwork within units" (M ± SD = 3.84 ± 0.60) and "Supervisor/manager expectations and actions promoting safety" (M ± SD = 3.82 ± 0.68). These scores indicate that the mean scores were almost at the level of Agree and were substantially lower than the maximum score of 5. "Organizational management support for safety" had the largest improvement potential, with a mean score (M ± SD = 2.90 ± 0.75) marginally lower than the level of Neither Agree nor Disagree. For the lowest scoring dimension, the standard deviation was also higher, indicating that the perception of this culture dimension is more diverse among staff. Hence, it is enlightening to assess the standard deviation when interpreting the results.
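The dimension scores discussed above are means (with SDs) over responses on a 1-5 Likert scale; a hedged sketch of the computation with invented respondent scores, not the published data:

```python
from statistics import mean, stdev

def dimension_score(responses):
    """Mean and sample SD for one HSOPSC dimension on a 1-5 Likert scale
    (each value is one respondent's dimension score)."""
    return mean(responses), stdev(responses)

# invented respondent scores for two dimensions
teamwork = [4.0, 3.5, 4.5, 3.5, 4.0]       # clusters near 'Agree' (4)
mgmt_support = [3.0, 2.5, 2.0, 3.5, 3.0]   # near 'Neither Agree nor Disagree' (3)
m_team, sd_team = dimension_score(teamwork)
m_mgmt, sd_mgmt = dimension_score(mgmt_support)
# here the lower-scoring dimension also shows the larger SD,
# i.e. more diverse perceptions among staff
```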
The findings indicated that this Norwegian hospital needed to improve patient safety culture and that more or different investments were necessary to achieve this. Moreover, the study also revealed that safety culture dimensions had lower scores compared with those in US hospitals [8], as well as lower scores than in the petroleum industry [18]. Another finding was that safety culture scores are challenging to improve and relatively stable over time [19]. Another study correlated HSOPSC dimensions with burnout and sense of coherence [23]. Findings from this study indicated that a positive safety culture was associated with the absence of burnout and a high ability to cope with stressful situations. As such, the study indicates that safety culture in hospitals is related to employees' health and stress at work.

Intervention Studies

Six studies involved interventions [24-29]. One of these intervention studies reported greater improvement than the others [25]. This study was conducted at Haukeland University Hospital, and HSOPSC measures were collected in 2009, 2010, and 2017. The researchers conducted a stepped wedge cluster randomized controlled trial implementation of the World Health Organization (WHO) Surgical Safety Checklist, combined with the implementation of a broader patient safety program. From 2009 to 2017, significant improvement was found in the following dimensions: "Unit managers' support to patient safety", "Continuous improvement", "Teamwork in unit", "Error feedback", "Nonpunitive", "Hospital managers support to patient safety", "Teamwork across units", and "Information handoffs and transitions". The largest positive change was related to "Hospital managers' support to patient safety", from 2.83 at the baseline in 2009 to a mean score of 3.15 in 2017. Other intervention studies also reported improvements, but these were generally weaker and covered shorter intervention periods. Aaberg et al.
[24,28] found improvement in three HSOPSC dimensions across their two studies: "Teamwork within unit", "Manager expectations and actions promoting patient safety", and "Communication openness". Storm et al. [27] focused their interventions at the interorganizational level. In the hospital part of the study, small improvements were reported for "Overall perceptions of patient safety culture" and "Organizational learning-continuous improvement" [27]. Moreover, one intervention study compared changes in registered nurses' perception of HSOPSC dimensions in restructured and nonrestructured intensive care units during a four-year period [29]. In this study, restructuring was associated with negative developments in "Manager expectations and actions promoting safety", "Teamwork within hospital units", and "Adequate staffing". Haugen et al. [26] found significant positive changes in the checklist intervention group for the culture factors "Frequency of events reported" and "Adequate staffing". Thus, the effects of the intervention were weak, since only two dimensions improved.

Reliability and Validation Studies

Seven studies were categorized as reliability or validation studies [16,30-35]. Confirmatory factor analysis (CFA) was performed to assess the quality of the measurement model of HSOPSC in the hospital [16,30,31,35] and prehospital settings [31]. The CFA indicated that HSOPSC was a valid and reliable tool for measuring patient safety culture in Norwegian hospitals. Some adjustments were made to the prehospital version, which was labeled PreHSOPSC [31]. Moreover, some items were removed in the development of a short version of HSOPSC, labeled HSOPSC-short [30]. A Short Safety Climate Survey (SSCS) was also developed in Norway, based on HSOPSC, for use in nonhealthcare settings. The SSCS is basically similar to the HSOPSC-short, but without the term "patient".
With this adjustment, the SSCS can function as a generic instrument to assess safety culture across sectors [30]. One study at Haukeland University Hospital [34] explored the factorial model of HSOPSC dimensions with exploratory factor analysis (EFA) using principal component analysis with Varimax rotation. Since EFA is a dimension reduction method, it was not surprising that the factorial model ended in fewer factors than the original model, namely 10 dimensions instead of 12. However, the study used the original 12-dimensional structure when investigating reliability and conducting benchmarks [34], without confirming the original version of the instrument with CFA. Another study used EFA before using CFA, but this was to develop and validate the abovementioned SSCS and HSOPSC-short [30]. Three of the studies developed and assessed theoretical models with the use of structural equation modeling (SEM), in combination with CFA, or with both CFA and EFA [30,32,35]. The first study explored the possibility of a common structural model measuring associations between safety dimensions and safety behavior in the healthcare and petroleum sectors, and this model was supported [30]. Another SEM study [35] developed and investigated how five selected HSOPSC dimensions influenced safety behavior and overall perceptions of patient safety. A third study [32] investigated a model adapted for the prehospital environment, measuring associations between safety concepts and the outcome dimension "Transitions and handoffs". These SEM studies are related to, and support, the nomological validity of HSOPSC. One study aimed at testing the criterion-related validity of HSOPSC [33]. Only two medical departments took part in that study, and several HSOPSC dimensions were correlated with adverse events. The Global Trigger Tool (GTT) was used to collect data on adverse events.
The study found an inverse association between patient safety culture and adverse events and hence did not support the criterion-related validity of HSOPSC.

Discussion

Perception studies in Norway have been important for investigating the level of patient safety culture in hospitals, revealing both strengths and areas for improvement. Studies have revealed that patient safety culture is more positive in US hospitals and the petroleum sector than in Norwegian hospitals [8,18]. This remains a challenge and shows the importance of continuing to focus on improving patient safety culture in Norway. This review revealed that safety culture dimensions in hospital settings are difficult to improve and can be very stable over time [19]. Moreover, implementing organizational changes, such as restructuring, can even reduce the level of patient safety dimensions [29]. Hence, organizations should never take the challenge of improving and changing patient safety culture lightly. The included intervention studies demonstrated that interventions most often improve very few of the HSOPSC dimensions [24,26-28]. Hence, interventions at the team and department levels will normally not improve all of the HSOPSC dimensions. Again, this confirms that realism should be integrated into safety culture improvement efforts. Improving safety culture takes time, is difficult, and can even be hampered by other organizational initiatives [29]. However, one study [25] showed that it is actually possible to change and improve patient safety culture more extensively over an eight-year period. This was achieved through a broad patient safety program, fostering engagement between trust boards, hospital managers, and frontline operating theatre personnel and thus enabling the effective implementation of the Surgical Safety Checklist. This demonstrates the complexity and endurance needed to improve HSOPSC dimensions more thoroughly in hospital settings.
Other hospitals should look at this study, as well as the experiences of other industries [9,36], when developing safety improvement programs. Additionally, safety programs should integrate theory and valid measures. Appropriate sampling and data collection methods, units of analysis, levels of data measurement and aggregation, and statistical analyses are also important factors when evaluating such programs and outcomes [37,38]. Validity and reliability are not heavily documented in the included intervention studies. To help determine the effect of nesting on the results, intraclass correlations (ICCs) can be computed to determine whether substantial variation exists between groups compared to variation within groups. The ICC describes how strongly units in the same group resemble each other [39], which is relevant to test when conducting interventions. Another challenge concerns aggregation: if the ICC is low, this is a counterargument for aggregating culture scores at the organization level [40]. In turn, this can influence the effects of interventions, making it necessary to add design effects, for instance, when there are many groups with few individuals within each group [39]. These challenges and issues were neither integrated nor controlled for in the Norwegian HSOPSC intervention studies. The first study in Norway [16] showed that the translated Norwegian version of HSOPSC had satisfactory reliability and validity and could be recommended for use in Norwegian hospitals. Further studies should continue to explore the psychometric qualities of HSOPSC in different settings and over time. Since HSOPSC is a standardized instrument, confirmatory factor analysis (CFA) is the appropriate procedure for validation, not exploratory factor analysis (EFA). If researchers want to test HSOPSC with EFA, then this should be combined with CFA.
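The ICC calculation described above can be sketched in a few lines. The following is an illustrative one-way ICC(1) computed from between- and within-group mean squares; the `icc1` function name and the sample scores are ours, not data from any of the reviewed studies:

```python
def icc1(groups):
    """One-way ICC(1) = (MSB - MSW) / (MSB + (k - 1) * MSW), k = average group size."""
    n_total = sum(len(g) for g in groups)
    k = n_total / len(groups)                                  # average group size
    grand = sum(x for g in groups for x in g) / n_total        # grand mean
    # Between-groups mean square
    msb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups) / (len(groups) - 1)
    # Within-groups mean square
    msw = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g) / (n_total - len(groups))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical safety-culture scores (1-5 scale) for three hospital units
units = [
    [3.8, 4.0, 3.9, 4.1],   # unit A: uniformly high
    [2.9, 3.1, 3.0, 3.2],   # unit B: uniformly lower
    [3.4, 3.5, 3.6, 3.5],   # unit C: in between
]
print(round(icc1(units), 2))   # prints 0.94 for this hypothetical data
```

A value near 1 indicates that respondents within a unit answer alike relative to between-unit differences, which supports aggregating culture scores to the unit level; a value near 0 argues against aggregation.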
However, the most problematic validity concern revealed in this review involves the study aimed at testing the criterion-related validity of HSOPSC [33]. Only two medical departments took part in the study; CFA was not conducted, nor was the level of shared variance reported (for instance, via the ICC). One way of handling such data is to aggregate the HSOPSC survey data at the department level before conducting correlations with Global Trigger Tool (GTT) data, which was not done in the study [33]. Interestingly, Farup [33] also emphasizes other concerns in the study: "Since the GTT never detects all adverse events and the proportion detected is unknown, the results do not indicate the true prevalence of adverse events." After referring to other studies [41-43], Farup also points to the fact that these studies "unveil major problems related to registration of adverse events and demonstrate that the GTT probably is inappropriate for comparisons between units, departments, and hospitals, as an indicator of the true prevalence of adverse events." Surprisingly, however, Farup did not follow his own recommendations to avoid these pitfalls and Type II error. Hence, the combination of challenges in this study demonstrates the complexity of establishing criterion-related validity for measurement instruments, which is important to focus on when testing criterion validity related to the HSOPSC instrument. Future studies should look carefully at these issues, as well as other recommendations [38], to avoid these pitfalls and to better investigate the criterion-related validity of HSOPSC.
The three studies that focused on developing and testing theoretical models with the use of structural equation modeling (SEM) [30,32,35] illustrate the importance of a systems approach to improving safety, and specifically patient safety; several factors work in combination and contribute directly and indirectly to the variation in outcome measures, in the hospital [31] and prehospital settings [32], as well as in a petroleum sector study [30]. Additional research is needed to gain insight into the mechanisms that mediate or moderate improvement efforts for patient safety culture in different settings. We suggest using a multilevel approach emphasizing that all levels in the organization have important safety functions and influence performance at the individual level through behavioral expectancies [9]. Notably, findings from the structural model studies, which were developed on the basis of theory, correspond with the findings from the most successful intervention study in Norway [25]; wider strategic safety initiatives at different levels are needed to improve safety culture more substantially. An interesting future possibility will be to conduct intervention studies building on the structural models that have been validated and developed in Norway.

Limitations

This review has some possible limitations. To compensate for the limitations of Oria, three studies were found based on a hand search. With this combined procedure, we assume that all relevant studies related to the inclusion and exclusion criteria were identified. Included studies were limited to Norway; hence, studies from other countries were not included. Moreover, we did not use any specific method for the synthesis of the results. Due to the studies' heterogeneity, we did not perform a meta-analysis or a statistical synthesis of findings. Neither did we assess the risk of bias in the included studies. The studies were categorized based on the study approach, which we assumed to be appropriate.
We recommend using the bibliography developed by AHRQ (https://www.ahrq.gov/sops/bibliography/index.html - accessed on 15 March 2021) to learn more about international studies based on HSOPSC. To give an example, 60 studies focusing on improving patient safety culture are listed in this bibliography. Hence, this review does not provide a global assessment of all studies and topics related to HSOPSC. Based on the generic areas discussed in this review, we still believe the results are generalizable beyond Norwegian healthcare settings.

Conclusions

The aims of this study were to review empirical studies using HSOPSC in Norway and to develop recommendations for further research on patient safety culture based on our findings. Several studies using the HSOPSC have been conducted in Norway, but not at a national level. Our findings indicate that comprehensive improvement of patient safety culture in hospitals is challenging and may take several years of systematic work. Moreover, experiences from Norway indicate that wider strategic safety initiatives at different levels are needed to improve safety culture more substantially. Research should aim for a more stringent methodological approach. CFA, rather than EFA, should be applied to replicate the dimensional factor structure of HSOPSC. Furthermore, establishing criterion validity is particularly difficult and challenging; we urge future research to avoid possible pitfalls. As a basis for the development of future intervention studies, researchers designing interventions could use the results from the SEM studies to develop more holistic and theoretically sound interventions, including the horizontal and vertical involvement of units and staff. Intervention studies should not take for granted that the reliability and validity of HSOPSC is adequate based on previous studies.
It is always a potential pitfall that effective interventions can be evaluated as noneffective if and when the psychometric properties of HSOPSC are problematic in certain settings. Researchers can benefit from applying the different new versions of HSOPSC that have been developed in Norway: SSCS [30], HSOPSC-short [30], and PreHSOPSC [31]. SSCS has been developed to fit nonhealthcare settings. HSOPSC-short has fewer items and is therefore optimal for combining with other scales, such as work climate dimensions, bullying, job performance, job satisfaction, and work ability [44]. The combination of such scales is highly relevant since safety culture relates to other work factors. PreHSOPSC has been developed to better fit prehospital settings and is probably the best alternative for measuring safety culture in these settings. Hence, this review offers several recommendations for further research that are also relevant for improving safety culture in healthcare.

Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/ijerph18126518/s1, Table S1: All sources, Table S2: MEDLINE and CINAHL.

Book section: Olsen, E.; Aase, K. (2012). The challenge of improving safety culture in hospitals: A longitudinal study using the Hospital Survey on Patient Safety Culture [19].
Effect of Male Planting Date and Female Plant Population on Hybrid Maize Yield and Evaluation of Use of Hybrid-Maize Simulation Model for Grain Yield Estimation in Hybrid Maize Seed Production

The study was carried out to determine the effect of male planting date (MPD) and female plant population (FPP) on the grain yield (GY) performance of a three-way hybrid and to evaluate the Hybrid-Maize simulation model for grain yield estimation in hybrid seed maize production. Fifteen treatment combinations of five MPD, expressed as a deviation from the female planting date, and three FPP, replicated three times, were used. The Hybrid-Maize simulation model was used to forecast the possible GY outcomes for the fifteen treatments of the experiment using estimated parameters and weather data for the 2006/7 season. The field experiment produced significant (P < 0.005) main effects but non-significant interaction effects for GY, yield components and the anthesis-silking interval (ASI). Female seed yield was affected by the time of male pollen shed relative to female silking (ASI), with the highest yields associated with close synchrony (ASI = +/-3 days). ASI had a significant effect on the number of kernels per ear (KPE), with the greatest KPE (318) associated with an ASI of +/-3 days. FPP effects on yield are typical for maize, showing a curvilinear response from low to high density. The optimum population density for GY was 5.4 plants m-2. Simulation output from the Hybrid-Maize simulation model showed an overestimation of GY compared to the observed yield. Furthermore, the model was unable to predict yields for the low FPP of 2.7 plants m-2. We found that the Hybrid-Maize simulation model has limited potential for simulating hybrid maize seed production, as it does not accommodate limitations that may occur during the growing season: differences in male and female planting dates, pollen density and dispersion.
Hence, the fixed parameters of the Hybrid-Maize simulation model can only be used for commercial maize production.

Introduction

Maize hybrid seed is a source of subsistence, an embodiment of technological change and a vital input for commercial maize production (Tripp, 2001). A response to the expected rise in demand for maize is inevitable according to a report by Rosegrant et al. (1995). World demand in 2020 is predicted to rise to about 138% of the 1995 demand. Given the limited opportunities for augmenting maize area in most countries, future output growth must come from intensifying production on current maize land. Shortage of maize hybrid seed in southern Africa is a major challenge considering efforts underway to increase maize production (Havazvidi & Tatterfied, 2006). Seed production and distribution is currently associated with a reduced production base, poor seed quality, increased marketing outlets and increased marketing costs. Therefore, there is a need for increased yield per unit land area to sustain the market as well as to offset costs.

Genetic Materials

Parents of a CIMMYT three-way hybrid (CML395/CML444//CML443) were used in this research: a single cross female parent (CML395/CML444) and an inbred male parent (CML443). The experiment was laid out according to a 5 × 3 two-way factorial arrangement in a Randomised Complete Block Design (RCBD). Treatments consisted of fifteen combinations of the first factor, five male planting dates (MPD) expressed as a deviation from the female planting date (FPD), and the second factor, three female plant populations: 26,666 plants ha-1 (low), 53,333 plants ha-1 (medium) and 80,000 plants ha-1 (high). The treatments were assigned randomly within blocks, with each treatment appearing once per block. Blocks were used as replications, with three replicates producing 45 plots for the experiment.
A female:male planting ratio of 3:1, which is commonly used in seed production, was used in this trial. Each plot occupied 66 m2, with border rows surrounding the block and border plots separating plots, to minimize cross-pollination across the block and within the plots, respectively. The experiment was isolated by distance and time to ensure that there was no cross-pollination with adjacent fields. Detasselling of the female single cross was done before the plants started shedding pollen, when the top 3-4 cm of the tassel were visible above the whorl, and this continued on a daily basis until complete. Shoot bagging of the female single cross ears was also carried out, and shoot bags were only removed on plots where the male lines were shedding pollen, to ensure that the pollen came from the specific male inbred line within a plot. The ears were covered back with the shoot bags as soon as the male line had reached the complete anthesis stage per plot. Therefore, the source of pollen was only the specific male line within a given plot. Shoot bags were eventually removed after the silks had dried off, and the ears were allowed to reach field maturity before harvesting commenced.

Field Management

Ploughing was carried out using a tractor-drawn heavy disc plough in September 2006 at the CIMMYT-Harare Research Station. A pre-marked wire was used to mark planting stations at a spacing of 0.75 m between rows and 0.25 m within rows, with rows 4 m in length. Two seeds were sown by hand per planting hill, and seedlings were thinned four weeks after planting to achieve the three plant densities of 2.7 plants per m2 (26,666 plants per hectare), 5.3 plants per m2 (53,333 plants per hectare) and 8.0 plants per m2 (80,000 plants per hectare).
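The medium density above follows directly from the hill spacing: one plant retained per hill on a 0.75 m × 0.25 m grid. A minimal sanity check of that arithmetic (the `plants_per_hectare` function name is ours):

```python
def plants_per_hectare(row_spacing_m, in_row_spacing_m, plants_per_hill=1):
    # Each hill occupies row_spacing * in_row_spacing square metres;
    # one hectare is 10,000 m^2.
    return plants_per_hill * 10_000 / (row_spacing_m * in_row_spacing_m)

# Medium density: one plant per hill on a 0.75 m x 0.25 m grid
print(round(plants_per_hectare(0.75, 0.25)))   # prints 53333, matching the stated medium density
```

The lower and higher densities would correspond to retaining fewer or more plants per unit area after thinning.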
A basal fertilizer application of 400 kg/ha of compound D fertilizer (8% N : 14% P2O5 : 7% K2O) was broadcast and disc-incorporated by a tractor. Topdressing with ammonium nitrate (34.5% N) was split applied: the first application of 200 kg/ha was done at four weeks after crop emergence, soon after thinning, and the second, also of 200 kg/ha, was done six weeks after crop emergence. The trial was mainly rain-fed; however, irrigation water was applied when necessary, for example, under dry planting to facilitate germination and in the case of a long dry spell. Irrigation scheduling was determined by the stage of development of the plants and temperature. In general, an irrigation of seven mm/hr for six hours was applied just after planting to facilitate germination, and thereafter the irrigation interval ranged from 9 to 15 days depending on the crop stage of development and temperature.

Trait Measurements

Measurements of the variables plant population density, planting dates of the male line and female single cross, days to anthesis (DA), days to silking (DS), grain yield components and root lodging were carried out in the net plot: the three central female rows. The measurements were carried out at various stages of development, and the data were used in the Hybrid-Maize simulation model as input data and also in a general analysis of variance (ANOVA) for estimating potential yield and assessing the actual data.
Hybrid-Maize Simulation Model

Running the Hybrid-Maize simulation model in yield-forecasting mode allowed real-time, in-season simulation of maize growth up to the date of the simulation run, and also allowed forecasting of the possible outcome in final yield based on the up-to-date weather data of the current growing season, supplemented by previously collected historical weather data for the University of Zimbabwe farm. Yield forecasts were made until the last day of the 2006-7 season in the weather file. To use the yield-forecasting mode, a weather data file containing 17 years of reliable historical weather data [year, day, solar (MJ m-2), temperature-high (°C), temperature-low (°C), relative humidity (%), and rainfall (mm)] was used, in addition to weather data for the 2006-7 season. The Hybrid-Maize simulation model could not handle different male and female planting dates; hence, female planting dates (FPD) were used to run the programme.

Statistical Analysis

Analysis of variance (ANOVA) for grain yield components was performed using the Agro-base GII Statistical Package (Agronomix Software Inc., 2007). A general linear model was used for the analyses of variance.
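The weather file described above is column-oriented (year, day, solar in MJ m-2, high and low temperature in °C, relative humidity in %, rainfall in mm). A minimal parser for such daily records is sketched below; it assumes whitespace-separated columns in that order, which is an illustrative layout, not necessarily the exact file format Hybrid-Maize uses:

```python
from typing import NamedTuple

class WeatherRecord(NamedTuple):
    year: int
    day: int             # day of year
    solar: float         # MJ m-2
    temp_high: float     # deg C
    temp_low: float      # deg C
    rel_humidity: float  # %
    rainfall: float      # mm

def parse_weather(lines):
    """Parse whitespace-separated daily weather rows, skipping blanks and comments."""
    records = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        y, d, solar, thigh, tlow, rh, rain = line.split()
        records.append(WeatherRecord(int(y), int(d), float(solar), float(thigh),
                                     float(tlow), float(rh), float(rain)))
    return records

# Two hypothetical December 2006 rows for illustration
sample = [
    "# year day solar t_high t_low rh rain",
    "2006 335 22.4 29.1 17.3 61 0.0",
    "2006 336 18.9 27.5 18.0 74 12.5",
]
recs = parse_weather(sample)
print(len(recs), recs[1].rainfall)   # prints: 2 12.5
```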
Results

An analysis of variance of the main effects of MPD and FPP, and the interaction of MPD and FPP, for the following traits is presented in Table 1: DS, DA, ASI, EPP, plant density (PD), harvest density (HD), ear density (ED), kernels per ear (KPE), thousand kernel weight (TKW) and grain yield (GY). There was no significant effect of MPD, FPP or the interaction between MPD and FPP on the number of days from sowing of the female plants to silking (DS). There was a significant difference (P < 0.05) in the number of days from sowing to anthesis of the male plants for MPD. A delay in the MPD was accompanied by an increase in the number of days from sowing to anthesis of the male plants. There was a highly significant difference (P < 0.001) in ASI as a result of the different male planting dates. ASI ranged from 7 to -15 days across the different MPD. Close synchrony between pollen shed of the male inbred line and silking of the female single cross (ASI = -3 days) was observed when male and female plants were sown on the same day. There was no close synchrony between pollen shed of the male inbred line and silking of the female single cross (ASI outside +/-3 days) for all the other male planting dates. FPP and the interaction of MPD and FPP had no observable effect on the ASI for the parental components used, as their mean squares were not significant. The relationship between GY and ASI showed that GY was greatest (6.81 t ha-1) where there was close synchrony between pollen shed of the male plants and silking of the female single cross (ASI = +/-3 days) (figure not presented). GY was lower when ASI was either less than -3 days or greater than +3 days. A significant curvilinear regression was obtained between GY and ASI (R^2 = 0.94).
The relationship between GY and female plant density (FPD) is presented in Figure 2, and a quadratic equation was fitted to the data. As plant density increased from a low FPP of 3.1 plants m-2 to a medium FPP of 5.0 plants m-2, there was a corresponding increase in GY from 3.20 t ha-1 to 5.30 t ha-1. Yield declined from the medium FPP to the high FPP of 6.4 plants m-2. Based on the curvilinear relationship, the estimated maximum GY was obtained at a FPP of 5.4 plants m-2.

Figure 2. Relationship between grain yield and female plant density.

The relationship between grain yield and female harvest density showed that the lowest grain yields occurred at low HD (3 plants m-2) for all MPD, with the exception of MPD +10 days, where the highest HD (6 plants m-2) had the lowest yield (0.9 t ha-1) (figure not presented). Increased HD from low to medium resulted in a corresponding increase in GY. A general decline in GY was noted with a further increase in HD from medium to high (6 plants m-2), with the exception of MPD of -5 and +5 days, where grain yield continued to increase to 8.07 and 3.70 t ha-1, respectively. Harvest density may also be related to the linear relationship between PD at emergence and ED at harvest (figure not presented). This explains the general relationship between GY and HD. A positive correlation (R^2 = 0.97) between PD at emergence and plant or ear density at harvest was noted. At low FPP, ED was greater than HD, while at high FPP, ED was less than HD. Yield components may help to explain variation in GY of seed maize across environments. Yield components of the female of the three-way hybrid in relation to PD, ASI, EPP, KPE and TKW varied as a function of MPD and FPP.
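The estimated optimum density is the vertex, x* = -b/(2a), of the fitted quadratic. The sketch below fits an exact quadratic through the two reported (density, yield) points plus a third, hypothetical high-density point; the 4.9 t ha-1 value at 6.4 plants m-2 is our illustrative assumption, since the exact high-density yield is not given in the text:

```python
def quadratic_through(p1, p2, p3):
    """Exact quadratic a*x^2 + b*x + c through three (x, y) points (closed-form solve)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    denom = (x1 - x2) * (x1 - x3) * (x2 - x3)
    a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom
    b = (x3**2 * (y1 - y2) + x2**2 * (y3 - y1) + x1**2 * (y2 - y3)) / denom
    c = (x2 * x3 * (x2 - x3) * y1 + x3 * x1 * (x3 - x1) * y2 + x1 * x2 * (x1 - x2) * y3) / denom
    return a, b, c

# (plant density in plants m-2, grain yield in t ha-1); the third yield value is hypothetical
a, b, c = quadratic_through((3.1, 3.20), (5.0, 5.30), (6.4, 4.9))
optimum = -b / (2 * a)   # vertex of the downward-opening parabola (a < 0)
print(round(optimum, 1))   # prints 5.4 with this illustrative third point
```

In practice one would fit the quadratic by least squares over all observed plots rather than through three points; the vertex formula for the optimum is the same either way.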
The relationship between ED and GY showed a positive correlation (R^2 = 0.97): an increase in ED resulted in a corresponding increase in GY (figure not presented). Maximum GY (5.3 t ha-1) was obtained at an ED of 4.71 ears m-2. A further increase in ED beyond 4.71 ears m-2 resulted in a decline in GY.

A negative correlation (R^2 = 0.53) between EPP and PD at emergence showed that an increase in PD at emergence resulted in a corresponding decline in EPP (figure not presented). An increase in PD at emergence from 3.1 plants m-2 to 5.0 plants m-2 resulted in a decline in EPP from 1.11 to 0.98 ears per plant. A further increase in PD at emergence to 6.20 plants m-2 resulted in a continued decline of EPP to 0.96 ears per plant.

The quadratic equations fitted to the data on the relationship between GY and female planting date for the two FPP (high and medium) showed a significant correlation (R^2 = 0.97) between grain yield and female planting date for the high (8.0 plants m-2) and medium (5.3 plants m-2) FPP, respectively (Figure 3). GY was greatest for the high FPP as compared to the medium FPP. The quadratic equation for the low FPP is missing, as the simulation model could not deal with the low population density (2.7 plants m-2).
Simulation Output for the Hybrid-Maize Simulation Model

The Hybrid-Maize simulation model simulates the growth and yield of maize so as to enable the evaluation of grain yield using different combinations of planting date and plant density. The greatest yield of 12.04 t ha-1 was noted for the high FPP (80,000 plants ha-1) (Table 2), and the general trend was that the greatest yield was obtained for the high FPP across all the female planting dates. The model could not deal with the low FPP (2.7 plants m-2), resulting in the missing output noted in the simulation model output. The model could also not cope with the hybrid seed field situation of male and female planting dates that differ. As a result, the FPD used in this trial were assumed to be the MPD; thus, the FPD in the model were used to estimate GY. Note: * Missing data from output.

Comparison of Predicted Yield and Observed Yield

Comparison of the model-predicted grain yield and the observed grain yield (Figure 4) showed that there was an overestimation of the predicted versus the observed yield. At low observed yields there were high predicted yields.

Discussion

A short ASI is a key trait for obtaining high grain yield in maize seed production (Bolanos & Edmeades, 1993). The same applies in hybrid maize seed production, where female plants are totally dependent on male plants for the supply of pollen. In this experiment, a shift in the interval away from close synchrony (+/-3 days) was associated with a decline in GY for the female of the three-way hybrid. Similar results were documented in reports by Bolanos and Edmeades (1993) and Edmeades et al.
(2000). An increased ASI could reduce kernel number because of a lack of pollen for late-appearing silks, while early-appearing silks may have reduced receptivity to the pollen. A shift in ASI away from close synchrony reduced grain yield. Close synchrony between pollen shed of the male and silking of the female (ASI = +/-3 days) gave the greatest yield. There are two possible reasons for such a significant enhancement in grain yield: i) a much larger fraction of late-emerging silks are pollinated when pollen shed is delayed relative to silking, hence there is a prolongation of the effective flowering period; and ii) all the early-emerging silks are pollinated as well because they remain receptive to pollen for several days after they appear (Bassetti & Westgate, 1993a, 1993b). This showed that the timing of silking of the female population in relation to pollen shed of the male population is a crucial management variable in hybrid seed production, as it impacts potential grain yield, as evidenced by the highly significant variability in grain yield for ASI. These results are similar to findings by Edmeades et al. (2000) and Bolanos and Edmeades (1993), showing a significant increase in ASI when plants were exposed to drought during the period bracketing flowering, consequently causing a reduction in GY.
Delaying pollen shed to maximize pollination by late planting of the male line (MPD = +5 and +10 days) did not increase GY but increased the potential risk of out-crossing from foreign pollen sources. This was contrary to the literature: Fonseca et al. (2004), using simulated data, reported that delaying pollen shed from the original 1.2 to 3 days resulted in nearly 68% of the silks being pollinated, causing a 23% increase in potential kernel yield. If the interval were increased to 5 days, potential kernel yield would be increased by about 38%, indicating the potential of increasing GY by late planting of the male line. Hence, the best approach to managing floral synchrony will depend on the time from planting to pollen shed and silking of the respective parents.
Maize is sensitive to intra-specific competition, as evidenced by the highly significant effect of FPP on GY. Stand density affects plant architecture, alters growth and developmental patterns, and influences carbohydrate production and partitioning. Increasing FPP from low (3.1 plants m⁻²) to medium (5.3 plants m⁻²) density resulted in increased yield. A further increase in FPP from medium to high (6.4 plants m⁻²) resulted in a decline in GY, in agreement with reports by Edmeades and Daynard (1979a), Tetiokago and Gardner (1988), Echarte et al. (2000), and Sangoi et al. (2002). For each production system there is a population that maximizes the utilization of available resources, allowing the expression of the maximum attainable GY in that environment. In this work, the optimum density was determined from the regression equation to be 5.4 plants m⁻². When the number of individual plants per unit area was increased beyond this optimum density, a series of consequences detrimental to ear ontogeny resulted in barrenness and hence the decline in GY (Sangoi et al., 2002).
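The optimum density quoted above comes from a quadratic yield-density regression. A minimal sketch of that calculation is shown below; the density values mirror the low/medium/high FPP of the text, but the yields are hypothetical placeholders, not the measured trial data.

```python
import numpy as np

# Quadratic fit of grain yield (t/ha) against plant density (plants/m^2).
# Densities follow the text's low/medium/high FPP; yields are illustrative only.
density = np.array([3.1, 5.3, 6.4])      # plants m^-2 (from the text)
yield_t = np.array([8.0, 10.8, 10.0])    # t ha^-1 (hypothetical values)

# Fit y = b2*d^2 + b1*d + b0 and locate the vertex of the parabola,
# which is the density that maximizes yield when b2 < 0.
b2, b1, b0 = np.polyfit(density, yield_t, 2)
d_opt = -b1 / (2.0 * b2)                 # optimum density, plants m^-2
```

With curvature-dominated data like this, the vertex lands near the medium density, in line with the 5.4 plants m⁻² optimum reported in the text.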
The decline in GY when plant density increases beyond the optimum is usually associated with a decline in the harvest index and increased stem lodging caused by greater inter-plant competition for solar radiation, soil nutrients and soil water (Tollenaar et al., 2000). This also limits the supply of photosynthetic photon flux density, carbon and soil nutrients, and consequently increases barrenness and decreases kernel number per plant, kernel size and kernel weight. Edmeades et al. (2000) also reported inter-plant and intra-plant competition affecting ASI as the underlying cause of the significant reduction in GY. Intra-plant competition may exist between ear and stem or root growth, resulting in a significant decline in GY.
Relationship between Grain Yield, Harvest Density and Other Yield Components
The relationship between GY and HD showed that as HD increased there was an increase in GY, with GY reaching a maximum value at the optimum HD. At low HD, grain yield was not compensated by increased KPE, TKW or EPP, while substantially low EPP occurred above the optimum HD (Tetiokago & Gardner, 1988). However, exceptions contrary to other reports in the literature were observed in this experiment. A continuous increase in grain yield as HD increased from medium to high was noted for MPD (−5 and +5 days). This could be accounted for by the fact that increased HD acted as an efficient management tool for maximizing grain yield by increasing the capture of solar radiation within the canopy for the two MPD. The linear relationship between GY and HD was further explained by the linear relationship between plant density at emergence and plant or ear density at harvest. As plant density at emergence increased, a corresponding increase in the number of plants or ears harvested at maturity was noted. This was contrary to a report by Sangoi et al.
(2002), which showed that high planting rates slow the development of axillary buds more than that of the shoot apex. Grain yield and its components EPP, KPE and TKW showed a dependence on ASI. According to a report by Edmeades et al. (2000), GY and its component KPE show a dependence on ASI of the general form GY = exp(a + b·ASI). In this experiment, for all measured yield components there was a general significant reduction in GY with increasing main effects of MPD and FPP.
Relationship between Plant Density at Emergence and Ears per Plant
An increase in PD at emergence resulted in a corresponding decline in EPP; the linear relationship indicated a negative correlation (R² = 0.53), in agreement with reports in the literature (Edmeades & Daynard, 1979). According to Edmeades and Daynard (1979), as plant density increases, the ratio of ear growth rate (i.e., rachis plus developing grain) to total shoot growth declines drastically. This decline can be attributed largely to the decline in radiation reaching the ear leaf at high densities relative to low and medium population densities. The ear leaf provides a large proportion of assimilates to the ear. Unfavourable environmental conditions acting through intra-plant competition reduce dry-matter partitioning from the ear leaf to the ear, resulting in cessation of ear development and ear abortion. This is illustrated in this work by the reduction in EPP with an increase in PD at emergence. High plant densities produced a low plant growth rate (PGR), whereas low plant densities induced a high PGR.
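The exponential form GY = exp(a + b·ASI) quoted from Edmeades et al. (2000) implies that each extra day of asynchrony rescales yield by a constant factor exp(b). A toy evaluation, with made-up coefficients (not the fitted values of the cited report), makes this explicit:

```python
import math

# GY = exp(a + b*ASI): hypothetical coefficients chosen so that yield is
# 10 t/ha at ASI = 0 and falls roughly 10% per additional day of asynchrony.
a = math.log(10.0)
b = -0.1

gy = {asi: math.exp(a + b * asi) for asi in (0, 3, 5, 10)}
decline_per_day = 1.0 - math.exp(b)   # fractional yield loss per day of ASI
```

The multiplicative structure means the relative penalty per day of ASI is constant, which is why close synchrony (small |ASI|) dominates the yield response.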
Simulation Output for Hybrid-Maize Simulation Model
Contrary to several reports (Jones & Kiniry, 1986; Yang et al., 2006), the Hybrid-Maize simulation model overestimated the GY potential of the female of the three-way hybrid for high and medium FPP. The greatest GY was noted for high FPP (12.04 t ha⁻¹). Delay in the female planting date resulted in a decline in yield for the two FPP, as a result of the reduced growing season for the late-planted female population and the reduced grain-filling period. The overestimation of the actual yield in the simulation output may reflect the fact that the Hybrid-Maize simulation model does not take into account other limiting factors over the course of the season, which might also have reduced the potential yield. Its inability to separate male and female planting dates in seed production is another factor that might have contributed to the overestimation of the actual yield. The model assumes no limitation in pollen density or timing of pollen shed, contrary to the variation introduced by the MPD and the use of a male inbred line. The inability of the simulation model to estimate yield for low FPP (2.7 plants m⁻²) is a further limitation of the model for simulating hybrid maize seed production.
Conclusion
Female seed yield was affected by the time of male pollen shed relative to female silking (ASI), with the highest yields associated with close synchrony (ASI = ±3 days). Female seed yield was also greatest at a medium FPP of 5.4 plants m⁻². ASI had a significant effect on KPE, with the greatest KPE (318) associated with close synchrony (ASI = ±3 days). Specific optimum plant densities and male planting dates in relation to the female should be determined for each hybrid to attain maximum GY in maize seed production.
From this study we found that the Hybrid-Maize simulation model has limited potential for simulating hybrid maize seed production, as it does not accommodate limitations that may occur during the growing season, differences between male and female planting dates, or pollen density and dispersion. Hence, the fixed parameters of the Hybrid-Maize simulation model can only be used for commercial maize production. Simulation of maize grain yield in hybrid maize seed production would only be possible if the model had the ability to: a) deal separately with male and female planting dates; b) determine pollen flow from male plants to female plants; and c) deal with variation of the population ratio of the male and female parents. We suggest that seed-producing companies may use the Hybrid-Maize simulation model to determine the yield potential of three-way hybrids, holding other factors of production constant except low female population density.
Figure 3. Relationship between grain yield and female planting date
Figure 4. Comparison of predicted yield and observed yield of the maize three-way hybrid
Table 1A. Mean square values of main effects of yield and yield components of the female of a three-way hybrid in a seed production set-up
Table 1B. Summary of means of main effects of yield and yield components of the female of a three-way hybrid in a seed production set-up
Table 2. Grain yield output for the Hybrid-Maize simulation model
Challenges and constraints of dynamically emerged source and sink in atomtronic circuits: From closed-system to open-system approaches
While batteries offer an electronic source and sink for electronic devices, atomic analogues of source and sink, and their theoretical description, have been a challenge in cold-atom systems. Here we consider dynamically emerged local potentials as controllable sources and sinks for bosonic atoms. Although a sink potential can collect bosons in equilibrium, indicating its usefulness in the adiabatic limit, sudden switching of the potential exhibits low effectiveness in pushing bosons into it. This is due to the conservation of energy and particle number in isolated systems such as cold atoms. By varying the potential depth and interaction strength, the systems can further exhibit averse response, where a deeper emerged potential attracts fewer bosonic atoms into it. To explore possibilities for improving the effectiveness, we investigate what types of system-environment coupling can help bring bosons into a dynamically emerged sink, and a Lindblad operator corresponding to local cooling is found to serve the purpose.
Mean Field Analysis of Equilibrium Sink
When the coupling constant U and sink-potential depth V are large compared to the hopping coefficient J, we can estimate how many bosons are allowed in a sink potential by ignoring the kinetic energy. For a system with N particles, the on-site energy of all particles localized in the sink is approximated by E(N) ≈ (U/2)N(N − 1) − VN. We construct another state where N_out ≤ N particles are outside the sink, and its energy is approximated by E(N − N_out).
The condition for the bosons to be more stable inside the sink in this approximation is E(N) < E(N − N_out), which reduces to V > (U/2)(2N − N_out − 1).
Sink in continuum model in and out of equilibrium
For the continuum model, the ground state and its dynamics may be studied by the Schrödinger equation for noninteracting atoms, or by the mean-field Gross-Pitaevskii equation (GPE) [1,2] for weakly interacting bosons, as previously implemented in modeling coherent transport [3,4]. We will analyze a simplified model where a sink corresponds to a narrow square well inside a finite box, in which a dilute quantum Bose gas in the weak-interaction regime can be described by a mean-field approach [5]. At zero temperature, the condensate is described by an effective condensate wave function Φ(r, t). The evolution of the condensate wave function in an external potential V(r, t) is described by the GPE:
iℏ ∂Φ(r, t)/∂t = [−(ℏ²/2m)∇² + V(r, t) + N_b U_l |Φ(r, t)|²] Φ(r, t),
where m is the mass of the bosonic atom and N_b is the number of bosons. Here we solve the GPE with algorithms involving real- and imaginary-time propagation based on a split-step Crank-Nicolson method [6,7], and follow Ref. [8] to normalize the wavefunction with ∫dx |Φ(x)|² = 1. The coupling constant U_l = 4πℏ²a_s/m is determined by the two-body s-wave scattering length a_s. The external potential V_ext(x) corresponds to a narrow well and is set to simulate the equilibrium or dynamical sink. A narrow, deep trap inside an overall harmonic trap has been realized in Ref. [9], and here we idealize the situation by considering square-well potentials. The setups and their equilibrium results are shown in Fig. 1a-1e, where the system is confined in a one-dimensional box with length L_l, which is taken as the unit of length, and the particle number is N_b = 50. We consider a square-well potential of depth V_l and width w_l ≪ L_l at the center or at one edge. The reason we explore different locations of the sink is that the initial condensate wavefunction may not be uniform and the dynamics may differ.
Moreover, the initial density varies with the interaction, as illustrated in Fig. 1c. When presenting the results, however, we will focus on features that are not sensitive to the location of the sink. We choose a narrow sink width w_l = 0.01L_l, as shown in Fig. 1. For a non-interacting Bose gas at zero temperature, the number of bound states inside a square well is determined by its width and depth [10]. For weakly interacting Bose gases with coupling constant U_l = gE_RΩ in equilibrium, fewer particles can be accommodated in the sink for larger g, due to the interaction energy, but the number of particles in the sink can be increased by increasing the depth of the sink potential. Here Ω = L_l³ and E_R = π²ℏ²/(2mL_l²) is the recoil energy. In the adiabatic limit [11,12], when the change of the sink potential is infinitely slow, the state remains in the ground state and the number of particles in the sink eventually agrees with the equilibrium case. However, the time required to approximate the adiabatic limit scales as L_l² and hinders the scalability of the device. A similar constraint also applies to the lattice case. In the following we will focus on setups with a sudden switch-on of a sink or source. To simulate a dynamically emerged sink, the potential is initially uniform, with V_l(x, t < 0) = 0; a quench to a deep sink potential then leads to transport of atoms. The suddenly emerged sink, however, does not work as expected when compared to its equilibrium counterpart. Fig. 1f shows the percentage of particles flowing into a dynamically emerged sink potential at the center, and Fig. 1g shows the case for a sink at the right edge of the system. Interestingly, in neither case does the maximal fraction of particles in the sink reach 6%. This low effectiveness of a dynamically emerged sink is a consequence of energy conservation.
The ground-state energy of the initial configuration without a sink is higher than that of the final configuration with a sink, because without a sink the particles spread over the whole system, while with a sink most particles tend to localize inside it to take advantage of the low potential energy. In an isolated system such as cold atoms, there is no external dissipation to relax the system from the ground state of the initial Hamiltonian to the ground state of the final Hamiltonian after a sudden change of the potential. Similar phenomena, where mismatches of energy spectra prohibit transport, have been discussed in mass transport [13] and energy transport [14], and later on we will present similar results in the lattice case. For the dynamically emerged sink at the center, fewer particles flow into the sink as the interaction increases, as shown in Fig. 1f. On the other hand, more particles can flow into the sink as the interaction strength increases if it is located at the edge. This subtle difference can be understood from the density distribution of the initial ground state. As the interaction becomes stronger, the density distribution of the initial ground state without a sink becomes flatter at the center and has relatively more particles towards the edge. Thus, the density at a sink potential at the edge (center) increases (decreases), as illustrated in Fig. 1c. The ineffectiveness of a dynamically emerged sink may also be understood from the wave nature of quantum systems. It is known that when an electromagnetic wave impinges on an aperture whose diameter is much smaller than the wavelength, the transmission is severely suppressed [15]. For the atomic analogue of a sink, the particles may be viewed as a matter wave whose wavelength is about the size of the whole system. This matter wave likewise has very low transmission into a narrow sink potential, as shown in our simulations.
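The equilibrium sink discussed above can be sketched numerically. The following minimal imaginary-time relaxation of the 1D GPE uses a split-step Fourier scheme (with periodic boundaries for simplicity) rather than the split-step Crank-Nicolson method of the text, and all parameter values (grid size, well depth, interaction strength) are illustrative, not those of the paper:

```python
import numpy as np

# Imaginary-time split-step relaxation of the 1D GPE (units hbar = m = 1):
#   mu*Phi = [-(1/2) d^2/dx^2 + V(x) + g |Phi|^2] Phi,  with  int |Phi|^2 dx = 1
L = 1.0                         # box length (unit of length)
n = 512                         # grid points
g = 10.0                        # illustrative interaction strength
dx = L / n
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)      # angular wavenumbers

w, depth = 0.01 * L, 5000.0     # narrow square sink at the center (illustrative)
V = np.where(np.abs(x) < w / 2, -depth, 0.0)

dt = 1e-5
phi = np.ones(n, dtype=complex)
phi /= np.sqrt(np.sum(np.abs(phi) ** 2) * dx)  # normalize to 1

for _ in range(5000):
    phi = phi * np.exp(-0.5 * dt * (V + g * np.abs(phi) ** 2))       # half potential step
    phi = np.fft.ifft(np.exp(-0.5 * dt * k ** 2) * np.fft.fft(phi))  # kinetic step
    phi = phi * np.exp(-0.5 * dt * (V + g * np.abs(phi) ** 2))       # half potential step
    phi /= np.sqrt(np.sum(np.abs(phi) ** 2) * dx)                    # renormalize

# Equilibrium condensate fraction collected by the sink
frac_in_sink = np.sum(np.abs(phi[np.abs(x) < w / 2]) ** 2) * dx
```

Relaxing to the ground state this way mimics the adiabatic (equilibrium) limit: the sink captures a sizable fraction of the condensate, in contrast with the few-percent capture after a sudden quench.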
Since the GPE is designed for weakly interacting systems, next we model the same dynamical process in a lattice model, allowing us to analyze dynamics in the strongly interacting regime. The sink potential in the lattice model corresponds to a sudden decrease of the on-site potential on a selected site. In this approximation, there is only one bound state on the sink site for a noninteracting lattice system, so this is similar to a delta-function potential [10] in the continuum case, V_ext = −V_lδ(x). The delta potential has only one bound state regardless of its depth V_l. The physics, as one will see, is qualitatively the same as for a square sink-potential well in the continuum case.
Adiabatic Limit
The general solution of the time-dependent Schrödinger equation at time t can be expressed as |ψ(t)⟩ = Σ_n c_n(t)ψ_n(t)e^{iθ_n(t)}, where θ_n(t) = −(1/ℏ)∫₀ᵗ E_n(t′)dt′. The coefficients c_n(t) follow from solving the Schrödinger equation. According to the adiabatic theorem [10], the system remains in the ground state if ∂H/∂t is extremely small compared to the energy-level spacing divided by the natural time unit of the system, which is ℏ/E_R (ℏ/J) for the continuum (lattice) model. In the continuum model, the recoil energy is E_R = π²ℏ²/(2mL_l²) and determines the energy difference between the lowest-energy levels. The time required to reach the adiabatic limit is limited by this energy difference, so it is proportional to the square of the system size. In Fig. 2, we show the dynamics of a small non-interacting lattice system with different ramping times. Both for a dynamic sink potential at the center and for one at an edge, the sink accommodates more particles as the ramping time becomes longer. Moreover, the results show that a dynamic sink with a longer ramping time collects more particles if the sink is at the center (∼50%, Fig. 2d) than at one edge (∼5%, Fig. 2b) under the same conditions.
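The ramping-time dependence can be reproduced in a toy single-particle lattice model (all parameters illustrative): a slowly ramped central sink leaves the particle close to the final ground state, while a near-sudden ramp does not.

```python
import numpy as np

# Single particle on an M-site open chain (units hbar = J = 1); the sink is
# an on-site energy -v at the central site, ramped linearly from 0 to V.
M, V = 11, 5.0

def H(v):
    h = -1.0 * (np.diag(np.ones(M - 1), 1) + np.diag(np.ones(M - 1), -1))
    h[M // 2, M // 2] -= v
    return h

psi0 = np.linalg.eigh(H(0.0))[1][:, 0].astype(complex)  # sink-free ground state
gs_f = np.linalg.eigh(H(V))[1][:, 0]                    # ground state with sink

def final_gs_population(t_r, steps=400):
    """Ramp the sink over time t_r and return the final ground-state weight."""
    dt = t_r / steps
    psi = psi0.copy()
    for s in range(steps):
        v = V * (s + 0.5) / steps            # linear ramp, midpoint value
        E, W = np.linalg.eigh(H(v))
        psi = W @ (np.exp(-1j * E * dt) * (W.conj().T @ psi))  # exact step
    return abs(np.vdot(gs_f, psi)) ** 2

p_fast = final_gs_population(0.2)    # near-sudden ramp
p_slow = final_gs_population(50.0)   # near-adiabatic ramp
```

A slow ramp (t_r much larger than ℏ divided by the minimum gap along the path) keeps the ground-state population near unity, while the fast ramp leaves the particle in a superposition with excited states, mirroring the ramping-time trend of Fig. 2.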
We caution that this is again related to the initial density distribution of noninteracting bosons, which is higher at the center and lower at the edge. In atomtronic applications it is more realistic to consider fast switching of the elements rather than the adiabatic limit, so our main focus is on a suddenly emerged (quenched) sink or source potential. As t_r increases, the system approaches the adiabatic limit, with more particles in the sink.
Born-Markov approximations and conserved quantities
In general, the theoretical framework of open quantum systems consists of a small system (labeled by "s"), which may be the finite lattice considered here, and a large environment (labeled by "e") interacting with the system. The contribution from the environment is treated as extra terms in the equation of motion of the system. Such a composite system can be realized by submerging a lattice system into a background of bosons, and the coupling between them can introduce dissipation or coherent cooling [16]. In this way the system can bypass the conservation of energy. Recent advances in local heating [17] and single-site cooling [18,19] further allow local manipulations to vary the energy of the system. To describe the dynamics of open quantum systems, it is more convenient to use the total density-matrix operator ρ_total of the system and the environment. Tracing over the environment degrees of freedom gives the reduced density matrix of the system. One usually assumes that initially the system and environment are independent, so ρ_total = ρ_s ⊗ ρ_e may be used as the initial condition. In general, the entire open quantum system cannot be solved explicitly due to the large number of degrees of freedom of the environment.
A manageable description can be obtained with i) the Born approximation, assuming that the frequency scale associated with the coupling between the system and environment is small compared to the dynamical frequency scales of the system and environment; ii) the Markov approximation, which requires that the coupling is time-independent over a short time scale and that the environment rapidly returns to equilibrium without being altered by the coupling; and iii) the secular approximation, which discards rapidly oscillating terms in the Markovian master equation. The Lindblad master equation of an operator Ô in the Heisenberg picture [20] can be written as
dÔ/dt = (i/ℏ)[Ĥ, Ô] + Σ_k γ_k (L̂_k† Ô L̂_k − (1/2){L̂_k† L̂_k, Ô}),
where the L̂_k are the Lindblad operators with rates γ_k. The observable Ô corresponds to a conserved quantity if it commutes with the Hamiltonian and the Lindblad operators. By setting Ô to be the total particle-number operator, one sees that it commutes with the Hamiltonian as well as with the particle-number Lindblad operators and the local Lindblad operator we constructed, so the particle number is conserved in those cases.
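As a concrete check of the last statement, one can build a small two-site Bose-Hubbard system (with a truncated Fock space; all parameter values illustrative) and verify that the total number operator commutes with the Hamiltonian and is annihilated by the dissipator of a Hermitian local Lindblad operator such as the on-site number operator:

```python
import numpy as np

# Two-site Bose-Hubbard model in a truncated Fock space (n_max bosons/site).
n_max = 4
d = n_max + 1
a = np.diag(np.sqrt(np.arange(1, d)), 1)   # truncated annihilation operator
I = np.eye(d)
a1, a2 = np.kron(a, I), np.kron(I, a)      # site operators on the full space
n1, n2 = a1.conj().T @ a1, a2.conj().T @ a2
Id = np.eye(d * d)

J, U = 1.0, 2.0                            # illustrative hopping and interaction
H = -J * (a1.conj().T @ a2 + a2.conj().T @ a1) \
    + 0.5 * U * (n1 @ (n1 - Id) + n2 @ (n2 - Id))
N = n1 + n2                                # total particle-number operator

def dissipator(L, O):
    """Adjoint-equation dissipator L^dag O L - (1/2){L^dag L, O}."""
    return L.conj().T @ O @ L - 0.5 * (L.conj().T @ L @ O + O @ L.conj().T @ L)

comm_HN = H @ N - N @ H     # vanishes: hopping and interaction conserve N
diss_N = dissipator(n1, N)  # vanishes: n1 commutes with N (pure dephasing)
```

Because both the commutator and the dissipator acting on N vanish, d⟨N⟩/dt = 0 under the master equation for this choice of Lindblad operator, which is the sense in which particle number is conserved in the text.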
Bisphosphonate use in the horse: what is good and what is not?
Background
Bisphosphonates (BPs) are a family of molecules characterized by two key properties: their ability to bind strongly to bone mineral, and their inhibitory effects on mature osteoclasts and thus on bone resorption. Chemically, two groups of BPs are recognized: non-nitrogen-containing and nitrogen-containing BPs. Non-nitrogen-containing BPs are incorporated into the energy pathways of the osteoclast, disrupting cellular energy metabolism and leading to cytotoxic effects and osteoclast apoptosis. Nitrogen-containing BPs primarily inhibit cholesterol biosynthesis, resulting in the disruption of intracellular signaling and other cellular processes in the osteoclast.
Body
BPs also exert a wide range of physiologic activities beyond the inhibition of bone resorption. Indeed, the breadth of reported activities includes inhibition of cancer cell metastasis, proliferation and apoptosis in vitro, as well as inhibition of angiogenesis and matrix metalloproteinase activity, altered cytokine and growth factor expression, and reductions in pain. In humans, clinical BP use has transformed the treatment of both post-menopausal osteoporosis and metastatic breast and prostate cancer. However, BP use has also resulted in significant adverse events, including acute-phase reactions, esophagitis, gastritis, and an association with very infrequent atypical femoral fractures (AFF) and osteonecrosis of the jaw (ONJ).
Conclusion
Despite the well-characterized health benefits of BP use in humans, little is known regarding the effects of BPs in the horse. In the equine setting, only non-nitrogen-containing BPs are FDA-approved, primarily for the treatment of navicular syndrome. The focus here is to discuss the current understanding of the strengths and weaknesses of BPs in equine veterinary medicine and to highlight the future utility of these potentially highly beneficial drugs.
Background
Bisphosphonates ((HO)₂P(O)CR₁R₂P(O)(OH)₂) (BPs) are chemically stable analogues of inorganic pyrophosphate (Fig. 1) that have been known to inhibit bone resorption since the 1960s [1,2]. Indeed, it was studies on the role of inorganic pyrophosphate in the control of soft tissue and skeletal mineralization that led to the discovery of inhibitors of calcification that resist hydrolysis by alkaline phosphatase [2]. The observation that inorganic pyrophosphate and BPs could inhibit not only the growth but also the dissolution of hydroxyapatite crystals drove further study of their ability to inhibit other physiologic processes, such as osteoclastic bone resorption [1-4]. BPs can be broadly classified into two groups (nitrogen- and non-nitrogen-containing), based on the presence or absence of an amine group and on their distinct molecular modes of action [5]. The strong affinity of BPs for the mineral phase of bone gives these molecules the unique property of selective uptake by bone, inherently providing a high degree of tissue specificity and facilitating BP access to osteoclasts. Furthermore, BPs tend to localize at sites of highest bone turnover, owing to the greater exposed mineral at these surfaces, where they can be taken up by osteoclasts during bone turnover. Within the osteoclast, the simpler, early-generation, less potent non-nitrogen-containing BPs (e.g., tiludronate and clodronate) (Fig. 1) are metabolically incorporated into non-hydrolysable analogues of ATP, which interfere with ATP-dependent intracellular pathways [2,6]. The more recently available and highly potent nitrogen-containing BPs (such as pamidronate and zoledronate) (Fig. 1) are not metabolized in this way but selectively inhibit farnesyl diphosphate synthase (FPPS) [7,8], a key enzyme in the mevalonate/cholesterol biosynthetic pathway.
In osteoclasts, disruption of this pathway alters cellular processes such as ruffled-border formation, which is critical for bone resorption [8,9].
What is the evidence for bisphosphonate efficacy in the horse?
BPs are Food and Drug Administration (FDA)-approved and commonly used in the US and Europe for the prevention and treatment of osteoporosis, as well as to treat other bone diseases such as Paget's disease and bone metastatic disease, with remarkable efficacy in humans [10-13]. BPs significantly reduce the risk of hip or spine fractures in older women [10] and significantly improve the quality of life of patients with metastatic cancer in bone [14]. Given the efficacy seen in the management of osteoporosis and metastatic bone disease, BP use has been explored in a myriad of other conditions. However, in the context of veterinary medicine, the primary uses of BPs have been the treatment of navicular syndrome in the horse [15,16] and palliative care of tumor bone pain in the dog [17]. Currently, two non-nitrogen-containing BPs, tiludronate and clodronate (Fig. 1), are FDA-approved and widely used in the treatment of navicular syndrome.
Fig. 1 Clinically-used bisphosphonates. The general bisphosphonate chemical structure with potential subgroup substitutions is shown in comparison with endogenous pyrophosphate. Individual non-nitrogen bisphosphonate structures (tiludronate and clodronate) are shown in comparison to two of the nitrogen-containing bisphosphonate structures (pamidronate and zoledronate).
Navicular syndrome is a chronic disease affecting the podotrochlear apparatus and is considered one of the most common causes of forelimb lameness in the horse [18]. In the US, both tiludronate and clodronate are approved for the control of clinical signs associated with navicular syndrome in horses.
Any other veterinary use is considered off-label and, while not illegal, such uses have not been studied by either the manufacturers or the FDA. Both drugs are also labeled specifically for use in horses over the age of 4, an age at which bone remodeling naturally slows. To date, nitrogen-containing BPs are not approved for use in the horse, although there are some reports of their use [19]. In the years since the widespread approved use of tiludronate disodium and clodronate in adult horses suffering from navicular syndrome, additional benefits of tiludronate use have been reported, including in the treatment of chronic back soreness [20] and lower hock osteoarthritis [21]. BPs are used in the horse in the treatment of chronic lameness of many different causes, presumably due, in part, to the reported analgesic effects of BPs. Although blinded, these studies had clinical signs as the primary outcome measure and do not report any changes in bone mass. Interestingly, bone mass has not been measured as an endpoint in any published equine study of BP safety or efficacy [22]. One of the oft-stated goals of BP treatment in the horse is an increase in bone mass and strength, the result of a reduction in osteoclastic bone resorption as observed in humans, but this parameter is largely unmeasured or ignored in equine studies [23]. Although a difficult endpoint in the equine setting, some consideration should be given to BMD measurement, or perhaps to more detailed evaluation of an appropriate bone-mass surrogate such as MRI, CT or serum bone-turnover markers. Indeed, some of the positive outcomes reported following BP treatment may be due to the pain-relieving or anti-inflammatory effects of BP therapy rather than to the efficacy of BPs in inhibiting bone resorption [24-26].
In this light, we recently reported the results of a small equine study in which the bone-turnover markers C-terminal collagen-I telopeptide (CTX-I) and osteocalcin were measured following a single clodronate injection (IM, 1.4 mg/kg). Weekly blood draws and analysis revealed no significant effects on bone-turnover markers, but the treatment did appear to reduce lameness [22]. These findings are consistent with the work of others [27] showing that tiludronate and clodronate (Fig. 1) do not appear to significantly impact bone tissue at a structural or cellular level at standard doses and administration schedules. In sum, these data support the notion that the effects of BP therapy in the horse may not be directly related to any inhibition of osteoclast activity. In another interesting experimental paradigm, unilateral cast immobilization of the horse forelimb was used to assess the protective effect of tiludronate on immobilization-induced bone loss [28]. Immobilization (disuse) increased serum biomarkers of bone resorption, which, as expected, were significantly reduced following tiludronate treatment at 1 mg/kg on days 0 and 28 of immobilization. Interestingly, this is one of the only studies directly demonstrating the anti-resorptive efficacy of tiludronate, or of any BP for that matter, in the horse. In general, equine-specific investigations of bone turnover and bone-mass changes following BP treatment are lacking and sorely needed.
That is important information, but what are the downsides?
Given the widespread BP use in the equine industry, there are only a few reports demonstrating a positive effect of either BP approved for use in horses with navicular syndrome [15,16,27], and none report bone-related complications.
However, one report documented a lack of change in bone resorption following tiludronate (1 mg/kg IV) or clodronate (1.8 mg/kg IM) treatment [27], and another a lack of any significant change in serum markers of bone turnover following clodronate (1.4 mg/kg IM) treatment [22]. In contrast, the majority of human studies report both beneficial and less beneficial effects of BP therapy in the treatment of postmenopausal osteoporosis and bone metastasis [9,10,12,29-32]. The adverse events reported in humans, including an association with osteonecrosis of the jaw and the perhaps more troubling atypical fractures [33-38], may forewarn of concerns about BP use in the veterinary field. The lack of complications in the veterinary BP literature could be due to the relatively low numbers of treated horses in these reports. Certainly, it was only after many years and many thousands of BP-treated patient-years that correlations between BP use and ONJ and AFFs were even recognized. It is important to note that it was only with the use of the more potent nitrogen-containing bisphosphonates that these adverse effects were observed and reported in small populations of patients [39]. Despite these extremely rare complications, BPs remain a widely prescribed medication, as BPs are proven to prevent fractures in patients with established osteoporosis or those who are at high risk of fracture. In these patients, the incidence of major complications associated with bisphosphonate use, such as ONJ and AFF, is very low [39]. It is important to place the potential negative effects of BP use alongside the advantages provided by BPs in the treatment of navicular syndrome and other disorders in veterinary medicine.
There has been much ado in the equine popular press highlighting recent human case reports and small clinical series suggesting that long-term bisphosphonate therapy (>5 years) may suppress normal bone remodeling to such an extent that endogenous bone healing is decreased [40]. The ruckus is based on the concern that, if replicated in the equine setting, long-term BP therapy would result in increased fracture risk and reduced fracture healing. As discussed above, human BP-associated fractures result from suppressed bone turnover and are referred to as "atypical" because they occur at sites (e.g., the subtrochanteric femur) that are not typically associated with osteoporotic fractures [41]. With regard to fracture healing, because the remodeling phases of fracture healing involve significant elevations in bone resorption [42], and BPs significantly reduce bone resorption, there is interest in the possible utility of BPs to enhance fracture healing by preventing resorption of the mineralized fracture callus [43,44]. Preclinical rodent [45], canine [46] and sheep [47] fracture repair studies provide evidence that BPs augment fracture healing, resulting in stronger bone [45]. Interestingly, there are only two human clinical studies [44,48], and none in the horse, that have focused on this critical question. In the HORIZON recurrent fracture clinical trial [48], no evidence of delayed fracture healing was observed when BP (zoledronic acid; Fig. 1) treatment began within 90 days after hip fracture, nor when treatment began within 2 weeks. More recently, the effects of early BP therapy on fracture healing and functional outcome following a fracture of the distal radius in osteoporotic patients were evaluated [49].
The fracture and bisphosphonates (FAB) trial was a double-blind, randomized, placebo-controlled trial involving 15 trauma centers across the United Kingdom that enrolled 421 bisphosphonate-naive patients aged ≥50 years with a radiographically confirmed fracture of the distal radius and randomized them in a 1:1 ratio to receive alendronate 70 mg once weekly (n = 215) or placebo (n = 206) within 14 days of the fracture. Administration of this highly potent N-containing BP did not affect fracture healing or clinical parameters [49]. Collectively, these data contradict the anecdotal claims of many veterinary practitioners that the BPs' mechanism of action disrupts the natural bone healing process. It is also possible that the potential for a catastrophic event is less likely in veterinary medicine because BP dosing is quite different. In the horse, the non-N-containing BPs tiludronate and clodronate (Fig. 1) are given every 3 months in a single dose of 1 mg/kg IV and 1.8 mg/kg IM (up to a maximum of 900 mg per horse), respectively. In a recent human clinical trial, the same BP (clodronate) was given IM (200 mg/day for 10 days) for the treatment of active erosive osteoarthritis of the hand [50], approximately double the equine dose on a mg/kg basis and repeated ten times as often. Indeed, the cumulative dose was even higher, since the patients also received a maintenance course of clodronate IM (200 mg/day for 6 days after 3 and 6 months) [50]. This study demonstrated IM safety and efficacy, with a significant reduction in the use of anti-inflammatory or analgesic drugs as well as increased hand functionality [50]. In light of this expanding information, how should veterinarians use bisphosphonates in the future? Given the growing concerns regarding treatment length and potential BP side effects, it is time for the veterinary community to push for more research and controlled trials of the use of BPs, as well as focused and appropriate laboratory studies in the veterinary space.
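The cross-species dose comparison above can be made concrete with a little arithmetic. The sketch below assumes illustrative body weights of 500 kg for an adult horse and 70 kg for an adult human; neither weight is stated in the cited trials, so the exact ratio is an estimate, not a reported figure.

```python
# Rough comparison of equine vs. human clodronate dosing on a mg/kg basis.
# Body weights (500 kg horse, 70 kg human) are illustrative assumptions,
# not values reported in the cited trials.

def mg_per_kg(total_mg, body_weight_kg):
    """Convert an absolute dose to mg/kg."""
    return total_mg / body_weight_kg

horse_dose = mg_per_kg(900, 500)    # IM cap of 900 mg per horse -> 1.8 mg/kg
human_daily = mg_per_kg(200, 70)    # trial dose of 200 mg/day -> ~2.9 mg/kg/day
ratio = human_daily / horse_dose    # ~1.6, i.e. roughly double per dose

print(f"horse: {horse_dose:.1f} mg/kg once every 3 months")
print(f"human: {human_daily:.1f} mg/kg/day for 10 consecutive days")
print(f"per-dose ratio (human/horse): {ratio:.1f}")
```

Under these assumed weights, each human dose is roughly 1.6 times the equine dose, and it is given on 10 consecutive days rather than once per quarter, which is the sense in which the human regimen is "approximately double and repeated 10-fold more."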
In addition, the incorporation of the existing human clinical data into the setting of CE as a means of advancing understanding of the utility and limitations of BPs is warranted. Furthermore, studies with several second-generation BPs may be required, given the distinct pharmacology and multiple subclasses of BPs that appear to act differently in mammalian assays and human clinical trials [51,52]. Importantly, in view of the long half-life of BPs, it is feasible that BPs may have a significant effect on bone turnover after re-dosing, beyond the 3-monthly dose regimen currently approved in the horse. It is important to conduct additional well-designed dosing studies with appropriate bone end-points, such as imaging and serum markers of bone remodeling. Such studies are important as they may discriminate between the bone and non-bone effects of BPs and relieve concerns about adverse equine skeletal effects such as those that occur in human patients when there are significant and lasting reductions in CTX-I following BP treatment. In addition, veterinarians must consider the rationale for BP treatment. Since little evidence of changes in BMD, or even in bone strength, exists following BPs in the horse, perhaps the primary utility of BP use is indeed a non-bone effect? This important distinction must be investigated. The use of BPs in the horse has been complicated of late by the recent public discourse regarding the off-label use of BPs in the yearling Thoroughbred industry. While the public outcry is concerned about 'cleaning up' potentially abnormal radiographs in young Thoroughbreds or a change in fracture risk as young Thoroughbreds reach training and racing age, this concern is not supported by laboratory animal research. Early preclinical rodent studies of clodronate and etidronate (Fig.
1) convincingly and repeatedly demonstrated effects of non-N-containing BPs (in doses from 0.1 to 10 mg/kg) in young growing rats, with significant reductions in long bone length due to disruptions in endochondral ossification but no differences in the mechanical properties of bone [53][54][55]. In humans, BPs are currently used in the treatment of pediatric bone disorders such as osteogenesis imperfecta (OI) [56], where any potential consequence at the growth plate is outweighed by the obvious patient benefits. As a result of their efficacy, BPs are being increasingly used in other scenarios ranging in severity from spontaneous disuse fractures in patients with cerebral palsy [57] to the prevention of steroid-induced osteoporosis in ambulatory children [58], as well as the prevention of bone loss in children with hypercalciuria [59]. In these cases, the beneficial effects of BPs outweigh the potential negative effects on endochondral ossification and long bone growth [60]. Importantly, the doses used are significantly greater than the doses currently approved for use in the adult horse. Certainly, in the setting of OI, cyclical BPs transiently reduce pain and improve function [61]. Doses of the N-containing BP (zoledronic acid) were 1.1 mg/kg every 3 months (ages 2-3) and, in patients >3 years of age, 1.5 mg/kg/dose every 4 months (maximum dose ≤45 mg/infusion and 4.5 mg/kg/year) [61]. In these patients, pain relief occurred immediately following infusion, with functional improvements observed 4 weeks later [61]. However, both pain and physical function return to pretreatment levels by the subsequent infusion, suggesting a potential non-osteoclast-mediated mechanism for the pain relief. With regard to the apparent analgesic effects of BPs, at least in humans, the data suggest these are more likely to be associated with N-containing BPs, although little or no mechanistic understanding exists.
A meta-analysis of 8595 patients enrolled in a number of BP clinical trials examined the BP analgesic effect [62]. Twenty-two (79%) of the 28 placebo-controlled trials found no analgesic benefit for BPs. The authors concluded that N-containing BPs appear to be beneficial in preventing pain by delaying the onset of bone pain (in the oncology setting) rather than by eliciting an analgesic effect per se [62]. In contrast, others have suggested that N-containing BPs are metabolized to novel ATP analogs facilitating activation of ATP-gated P2X receptors, albeit in rat sensory neurons, as a potential analgesia mechanism [63]. On the other hand, Kim et al. [64] compared the analgesic activity of a variety of N-containing and non-N-containing BPs in mice. Their results suggest that non-N-containing BPs, not N-containing BPs, display analgesic effects at doses lower than those inhibiting bone resorption, similar to what we have reported in the horse [22]. Although the jury is still out regarding the specific mechanism(s) responsible for BP-induced analgesia, the best in vivo evidence for a BP-associated analgesic effect may well be with a non-N-containing BP in the horse [22].

Conclusions

In the horse there is currently a dearth of information regarding the effects of single and repeated doses of clodronate and tiludronate. Well-designed and appropriately powered research by non-biased researchers, with germane bone parameters as outcome measures, must be completed. Only with these data can horse owners and practitioners alike make informed decisions regarding the efficacy and appropriate clinical use of these potent molecules, and ongoing educational efforts for clients and practitioners on these topics are certainly required.
Following the development of a better understanding of BP effects in the horse, appropriately designed and powered placebo-controlled studies will determine to what extent beneficial BP effects on lameness are due to the inhibition of bone resorption, and will ascertain the details of repeat dosing in the equine setting. Such a strategy is required to ensure safer clinical use and to produce a sufficient level of evidence of safety.

Authors' contributions

AM performed the majority of the literature review and wrote the manuscript with LJS, who conceived the idea. FHE and AEW wrote and edited the manuscript extensively, with LJS. FHE also contributed the structures in Fig. 1. None of the authors report any competing financial interests. All authors have read and approved the manuscript.

Funding

Our efforts in this area were supported by funds provided by Texas A&M University, College of Veterinary Medicine and Biomedical Sciences. The College funding had no role in the design of the study; the collection, analysis, and interpretation of any of the literature; or the writing or conception of any aspect of the manuscript.

Availability of data and materials

Not applicable; no primary data are presented.

Ethics approval and consent to participate

Not applicable; the manuscript does not report on or involve the use of any animal or human data or tissue.
Immunobiology and Cytokine Modulation of the Pediatric Brain Tumor Microenvironment: A Scoping Review

Simple Summary

Pediatric brain tumors are distinct from adult tumors and pose challenges due to their unique characteristics, including differences in tumor immunology, molecular profiles, and response to various treatments. Understanding these differences is crucial for developing targeted and effective therapeutic strategies. This review delves into our current understanding of the immunobiology of various pediatric brain tumors and aims to unravel the intricate interactions between the tumor microenvironment and the immune system. By identifying the essential characteristics and immunobiology of pediatric tumors, we can explore strategies to leverage the immune system for novel treatment approaches.

Abstract

Utilizing a scoping review strategy in the domain of immune biology to identify immune therapeutic targets, the knowledge gaps for implementing immune therapeutic strategies for pediatric brain tumors were assessed. The analysis demonstrated limited efforts to date to characterize and understand the immunological aspects of tumor biology, with an over-reliance on observations from the adult glioma population. Foundational knowledge regarding the frequency and ubiquity of immune therapeutic targets is an area of unmet need, along with the development of immune-competent pediatric tumor models to test therapeutics, especially combinatorial treatments. Opportunities arise as pediatric tumor classification evolves from histological to molecular, in combination with targeted immune therapeutics.

Introduction

Brain tumors are the most common solid tumors in children, with approximately 5000 newly diagnosed cases per year (https://seer.cancer.gov/statfacts/html/childbrain.html, accessed on 14 July 2022).
Despite significant advances in surgical, radiotherapeutic, and chemotherapeutic strategies over the last several decades, brain tumors still remain the largest cause of cancer-related mortality in children [1]. Current standards of care leave patients with significant long-term sequelae [2]. Due to the continual need for more effective therapeutic modalities and the infiltrative nature of many pediatric brain tumors, immunotherapy represents the future horizon for adjuvant therapy. The field of cancer immunotherapy has improved the outcomes for many adult solid cancers; new treatment options for pediatric brain tumors can significantly lag behind those developed for adult cancers.

Search Strategy

A scoping review was conducted according to the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines. The PubMed MEDLINE, Embase, and Scopus databases were searched on 14 July 2022 using keywords such as "brain tumor", "pediatric", and "immunology". No restrictions on date, study type, or language were applied. Full search terms are displayed in Supplementary Table S1. The protocol was not registered. Upon review, duplicates were eliminated via automatic deduplication in EndNote X9 (Clarivate Analytics, London, UK). All remaining articles were screened by title and abstract for relevance. Articles progressing to full-text review were screened for final inclusion based on prespecified inclusion and exclusion criteria. Inclusion criteria were: written in or translated into English, full text available, studying any type of pediatric brain tumor, and reporting any clinical outcomes or immunological properties of pediatric brain tumors. Exclusion criteria included conference abstracts, case reports, narrative reviews, systematic reviews, and meta-analyses. A second reviewer replicated the search strategy, and disagreements were reconciled via consultation with a third reviewer.
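The deduplication step described above (the authors used EndNote X9's automatic deduplication) can be sketched in a few lines. The record fields and the title-normalization rule below are illustrative assumptions, not a description of EndNote's actual matching logic.

```python
# Minimal sketch of record deduplication for a literature search, as used in
# scoping/systematic reviews. Field names ("title", "doi") and the matching
# rule are illustrative assumptions; EndNote's internal logic differs.

def normalize(title):
    """Case-fold and strip non-alphanumerics so near-identical titles match."""
    return "".join(ch for ch in title.lower() if ch.isalnum())

def deduplicate(records):
    """Keep the first record seen for each DOI or normalized title."""
    seen, unique = set(), []
    for rec in records:
        keys = {normalize(rec["title"])}
        if rec.get("doi"):
            keys.add(rec["doi"].lower())
        if keys & seen:          # matches a record already kept -> duplicate
            continue
        seen |= keys
        unique.append(rec)
    return unique

records = [
    {"title": "Immunology of pediatric brain tumors", "doi": "10.1000/x1"},
    {"title": "Immunology of Pediatric Brain Tumors.", "doi": None},  # same title
    {"title": "A different study", "doi": "10.1000/x2"},
]
print(len(deduplicate(records)))  # 3 records collapse to 2 unique
```

Keying on both DOI and a normalized title catches the common case where the same article is indexed in two databases with slightly different punctuation or capitalization but only one copy carries a DOI.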
Results

Using the PubMed MEDLINE, Embase, and Scopus databases, 52 articles related to pediatric brain tumor immune biology were identified in the literature (Figure 1A; Supplementary Table S1). Relevant information was gathered from the selected studies, including the study design, bibliographic data, pediatric brain tumor type(s), immune-related interventions (such as immunotherapy), study outcomes, and clinical outcomes. Any discrepancies were resolved through discussion or by an additional reviewer if necessary. The extracted data were synthesized and analyzed to provide an overview of the current state of knowledge on the intersection of pediatric brain tumors and immunology, including potential immunological targets and strategies for improving patient outcomes.

General Themes of Pediatric Brain Tumor Immunobiology

The pediatric brain tumor microenvironment consists of an interconnected population of neurons, astrocytes, microglia, and oligodendrocytes residing within a unique extracellular matrix (ECM) [7]. Given this complexity, studies are typically focused on the following themes: (1) the general state of immunosuppression in pediatric patients (48%); (2) immune surveillance in the brain (15%); (3) the mutational burden of pediatric brain tumors (13%); and (4) differences between adult and pediatric glioma immune profiles (10%) (Figure 1B).

Differential Immune Surveillance within Pediatric Brain Tumors

For diffuse midline gliomas, CD3 infiltration frequency is similar in adult and pediatric patients, whereas CD8 expression may be greater in adults [8]. Some investigators have noted that pediatric brain tumors exhibit a less immunosuppressive tumor microenvironment when compared to adult brain tumors [9]. When investigating molecular subtypes within pediatric high-grade glioma (HGG), investigators found that immune-suppression markers were predictive of survival in only certain molecular subgroups.
In K27-mutated tumors, PD-L1 and CTLA-4 confer a worse prognosis, while this effect was not detected in patients with G34-mutated gliomas. G34-mutated tumors have been found to rely on the TGFB1 and HAVCR2 (TIM3) pathways for immune evasion [10], indicating that the underlying genetic makeup of a glioma may influence which immune-suppressive mechanisms dominate. One study has shown that pediatric brain tumors can be classified into five different proteomic immune signatures. The first group is characterized by predominant infiltration of macrophages, microglia, and dendritic cells, epithelial-mesenchymal transition (EMT), and the presence of adenosine-mediated immune suppression. The second group is characterized by upregulation of the immune-suppressive glutamate signaling pathway. These two groups include a mixture of low-grade gliomas (LGGs) and HGGs. The third group, consisting mostly of craniopharyngiomas, was characterized by an increase in EMT, CTLA-4, and PD-1 expression. The final two groups show low levels of immune infiltration and upregulation of WNT signaling [11]. In an analysis spanning brain tumor types, including pilocytic astrocytoma (PA), ependymoma (EPN), glioblastoma (GBM), and medulloblastoma (MB), PA and EPN had a higher frequency of infiltrating immune cells, including activated myeloid cells, than GBM, MB, or normal tissue [12]. One group has reported that LGGs had a higher T cell density than HGGs. Within LGG, however, T cell infiltration depended on the cancer lineage, with pleomorphic xanthoastrocytoma (PXA) and ganglioglioma containing relatively higher T cell densities [13]. Another study found that PXA had significantly higher CD8+ T cell infiltration than gangliogliomas [14]. A comparison between GBM and MB showed that the latter had a particularly low amount of tumor immune cell infiltration [15].
These data indicate that each type of brain tumor is associated with varying tumor-infiltrating lymphocyte levels [16,17]. In a cohort of MB, PNET, and astrocytomas, 76% of tumors were positive for CD8+ T cells, 85% contained CD4+ T cells, and 97% contained macrophages, but these constituted only 1-10% of the total cells [18]. DIPG patients have decreased NK cells and increased B cells in the peripheral blood when compared to control blood samples, suggesting that these two immune cell populations may be differentially trafficked during DIPG growth [19]. Notably, even within specific glioma pathologies such as PA, in which there is robust immune infiltration, infiltration can be highly variable [20]. Multiple factors likely contribute to the degree and types of immune infiltration throughout the tumor microenvironment (TME), including but not limited to the following: (1) the presence of an immunogenic antigen; (2) immune chemokine expression; (3) disruption of the blood-brain barrier; (4) genetic and epigenetic features; and (5) the types of immune suppression that predominate within a given malignancy. Notably, the presence of immune cells within the TME does not imply functionality, because these cells can be either anergic or exhausted. As such, future studies should include functional assessments. Although these studies effectively convey the challenge of heterogeneity within these tumor types and subtypes, there is still a marked need for further granularity on immune cell functions and states. It is still unclear how various immune cells are distributed across the tumor and TME and whether their roles in promoting or suppressing the tumor are conserved across different regions. Given this, novel approaches are required to gather data beyond simply quantifying the various immune cells in each tumor.
Instead, targeted investigations of the immune cell distribution and signatures across various locations within the TME are needed to fully understand the role of these cells and how they can best be manipulated to generate an effective anti-tumor immune response.

The Role of Immune Suppression in Pediatric Brain Tumors

It has been postulated that pediatric brain tumors may actually be less immunosuppressive than adult brain tumors, suggesting that some adult therapies that may not have been successful should not be discarded as viable treatment options in children and adolescents [9]. A key distinguishing factor between adult and pediatric immunobiology is the generally immature state of the pediatric immune system. The transition from fetal to postnatal life requires an associated transition from immunosuppression to immunological responsiveness [21]. Given the evolving state of the pediatric immune system, the effects of these factors on the differences in brain tumor immune profiles between children and adults await further in-depth study. Studies conducted thus far have detailed mechanisms known to play a role in neonatal immunity within the brain, such as increased Th2 CD4+ cells, reduced CD8+ T cells and interferon (IFN)-gamma responses, immaturity of dendritic cells (DCs), and increased IL-10 secretion by antigen-presenting cells. None, however, have described the potential effects of these cells on brain tumor growth [22][23][24][25]. Immunosuppressive mechanisms, such as adenosine and myeloid-derived suppressor cells (MDSCs), increase immediately after birth, contributing to the tolerogenic immune status of pediatric patients. However, these mechanisms have mostly been studied in adult brain tumors [26][27][28][29]. One study investigated the systemic immune profiles of children with brain tumors to determine whether there was a common pattern. Only MB patients had a unique serum cytokine profile, characterized by high VEGFA and IL-7 and low IL-17A and TNF-β [30].
Unique to the CNS are microglia, macrophage-like cells that reside in the brain parenchyma [31]. Throughout brain development, microglia regulate a number of immune chemokines, such as CXCL12 and CXCR4 [32], which could create a specific microenvironment in which tumor cells emerge [33]. Although studies have outlined the plasticity of microglia in certain diseases, the identification of specific pro-tumor subtypes of microglia, as well as the mechanisms by which these subtypes arise to promote brain cancer, has not been studied extensively in children [34].

Pediatric Brain Tumors Typically Have a Very Low Mutational Burden

Most pediatric brain tumors have a low tumor mutational burden (TMB) [35]. Despite this, there tends to be a higher occurrence of epigenetic changes compared to adult brain tumors. This distinction can be attributed to the following reasons: (1) certain tumors, like MB, originate during embryonic development; (2) developmental pathways in children may experience dysregulation, contributing to tumor growth; and (3) children have a shorter duration of exposure to environmental carcinogens [36]. In a comprehensive genomic profiling of 723 pediatric brain tumors, low TMB was present in 92% of cases [37]. This observation also holds for other rare cancers, such as malignant rhabdoid tumors [38]. Both LGG and HGG pediatric brain tumors have a low TMB; however, 6% of HGGs are hypermutated, with greater than 20 mutations/Mb [39]. This low TMB presents a unique challenge for immunotherapeutic treatment. Some immunotherapies rely on targeting a specific tumor antigen. Since such antigens develop from mutations, the lower TMB means that antigenic targets are less prevalent in pediatric brain tumors and may confer a decreased chance of a response to some immune therapies [40].
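The hypermutation cutoff quoted above (>20 mutations/Mb) amounts to a simple threshold rule, sketched below. The cohort values are hypothetical illustrations, not data from the cited profiling studies.

```python
# Illustrative threshold rule for the hypermutation cutoff cited in the text.
# The cohort values are hypothetical; only the 20 mut/Mb cutoff comes from
# the cited pediatric HGG literature [39].

HYPERMUTATION_CUTOFF = 20.0  # mutations per megabase

def is_hypermutated(tmb):
    """Return True if a tumor's TMB (mutations/Mb) exceeds the cutoff."""
    return tmb > HYPERMUTATION_CUTOFF

cohort = [1.2, 0.8, 3.5, 42.0, 0.4]   # hypothetical TMB values (mut/Mb)
hyper = [t for t in cohort if is_hypermutated(t)]
print(f"{len(hyper)}/{len(cohort)} tumors hypermutated")  # 1/5
```

Most values in a pediatric cohort would fall far below the cutoff, consistent with the 92% low-TMB figure, with only a small hypermutated tail.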
This has been further corroborated by an analysis of tumor antigen precursor protein profiles, in which adult gliomas expressed 94% of the profile, whereas pediatric gliomas expressed only 55-74% [41]. Some pediatric tumors, such as MB, have more antigens and NK infiltration, which positively correlate with prognosis, suggesting that these tumors may be more easily targeted with immune therapeutics [42][43][44]. In adult gliomas, a low TMB may not be a biomarker of a response to some immunotherapies, such as virotherapy [45]. Although virotherapy has been evaluated in pediatric glioma trials, it is unclear whether there is a therapeutic benefit [46,47].

Immunoediting as a Framework for Pediatric Brain Tumor Immunology

The concept of "tumor immunoediting" has been widely used as a framework for understanding how tumors still develop despite the myriad of immune responses that they elicit (Figure 2). While our understanding of pediatric brain tumors remains limited, following this framework allows for a systematic approach to investigating the complex immunological interactions occurring within the developing pediatric brain that may determine the potential immunogenicity and immune-evasion mechanisms of these tumors. Although significant challenges exist in comprehending the intricate dynamics of pediatric brain tumor immunology, the adoption of the immunoediting concept offers a robust foundation for advancing innovative therapeutic strategies in each phase of this framework. The general cancer immunoediting concept has three major phases by which innate and adaptive immune cells respond to tumor cells: (1) elimination; (2) equilibrium; and (3) escape [48][49][50]. In the elimination phase, tumor cells that escape the non-immune mechanisms of tumor suppression are recognized and eliminated by innate immune cells. The incomplete eradication of these tumor cells drives the next phase, equilibrium.
During this phase, tumor cells enter a state of dormancy, where they can evolve to become immunosuppressive or modulate tumor-specific antigens to escape immune recognition. Through this, there is a balance between the recognition of tumor cells by the adaptive immune system and tumor mutations. Finally, during the third phase, escape, the multitude of tumor mutations caused by immune pressure selects for an immune-resistant tumor phenotype. The tumor is then able to generate an immunosuppressive environment by escaping recognition by anti-tumor immune cells and generating pro-tumor immune responses [48][49][50]. The first step of the elimination phase relies heavily on the innate immune system recognizing tumor cells through the recognition of damage-associated molecular patterns (DAMPs), rather than tumor-specific antigens [50,51].
DAMPs in brain tumors include uric acid, heat-shock proteins, ligand transfer molecules induced by CpG DNA, and extracellular matrix derivatives that serve as ligands for toll-like receptors (TLRs). The recognition of these DAMPs causes the activation of pro-inflammatory responses, the maturation of dendritic cells, T cell antigen presentation, and the release of pro-inflammatory signals, which lead to the recruitment of immune cells that recognize tumor cells and release IFN-γ [4,52]. TLR7 expression has been found to be a prognostic factor for MB patient survival [53]. The predominant cells participating in this activity are thought to be the CNS-resident microglia. There is selective enrichment of microglia/macrophage-related genes in pediatric HGG of the mesenchymal subtype [54]. Although these studies reveal the importance of a heightened TLR response in immune cells for recognizing and eliminating tumor cells, TLRs are also expressed on brain tumor cells, thereby playing dual roles in eliciting anti-tumoral and pro-tumoral responses [55]. Some clinical trials have suggested signals of a clinical response using TLR agonists [56]. However, others have shown that the inhibition of pro-tumor TLR signaling may suppress glioma growth [57,58]. Although the different TLRs and their pro-tumor or anti-tumor roles have been fully characterized in various adult brain tumors, TLR expression in pediatric brain tumor types is still being studied [56]. Despite the limited number of studies on TLR immunotherapy against brain tumors, the use of TLR agonists in conjunction with immune checkpoint inhibitors has been shown to effectively induce cytotoxic T cell responses that suppress cancers [59]. The release of IFN-γ from these innate immune cells results in limited killing of the tumor cells through various anti-proliferative, anti-angiogenic, and apoptotic effects.
As immature dendritic cells start to mature, antigen-loaded dendritic cells migrate to the cervical lymph nodes, where they present tumor antigens to CD4+ and CD8+ T cells. There, presentation to naive T cells allows for their differentiation, maturation, and clonal expansion [4,52]. Thereafter, the immune effector cells migrate and infiltrate the tumor. An antigen-presenting event within the tumor is likely important for immunological recognition of the cancer [16,17]. During the equilibrium stage, there is antigen modulation ultimately resulting in antigen loss, which has been described in several clinical trials [60,61], as well as low levels of immunogenicity mediated by the lack of MHC expression and co-stimulatory molecules, which triggers immunological anergy. During immune escape, immunogenic tumor cells are no longer detected by the immune system because of the following: (1) the loss of immunogenic antigens [62,63]; and (2) the downregulation of MHC expression [64][65][66] or of antigen presentation through the cGAS/STING pathway [67]. Specifically, in pediatric brain tumors, the downregulation of MHC-I and CD1d has been documented [68,69]. During the escape phase, the TME of pediatric brain tumor patients is notable for marked immunosuppressive cytokines. In MB, a mutated mTOR pathway leads to increased IDO1 expression [70], which triggers the expansion of regulatory T cells, enabling tumor cells to grow [71]. IDO inhibitors have been tested in a wide variety of oncological indications, including adult glioblastoma, but the Phase II results have not yet been released [72]. Glutamate has been noted to be aberrantly expressed in multiple cancers [73] and can promote immune-evasion mechanisms via immunosuppressive cytokine production in certain types of pediatric brain tumors [74]. Adenosine is another immune-suppressive pathway that can be operational in these tumors [75,76].
TGF-β has been extensively documented as being immunosuppressive in the TME of adult gliomas, but also in MB [77,78]. When TGF-β signaling is blocked, regulatory T cells are decreased and there is an increased capacity of CD8+ T cells to carry out cytotoxic functions [77]. TGF-β also functions in MB to antagonize NK anti-tumor functions [79], which can be therapeutically manipulated for anti-glioblastoma activity [80]. Tumor-derived exosomes (TEXs) can also mediate immune suppression in pediatric brain tumors [81]. MB-secreted TEXs have been shown to inhibit IFN secretion from T cells in a dose-dependent manner [82]. However, these TEXs can also be immune-stimulatory [83], so the key determinant of immune modulation is likely dependent on the exosomal content. Monocytes can become polarized to support tumor growth and suppress the anti-tumor immune response. Pro-tumor-polarized tumor-associated macrophages (TAMs) have been identified in a wide variety of pediatric brain tumors [84]. This polarization may occur through the nuclear factor-kappa B (NF-κB) pathway, which was found to be increased in posterior fossa group A (PFA) ependymomas [85]. The NF-κB complex promotes the transcription of many cytokines, including IL-6, which, when chronically secreted, maintains the immunosuppressive environment through the polarization of infiltrating monocytes [85,86]. Platelet-derived growth factor subunit B (PDGFB) has also been implicated as playing a role in immunosuppressive macrophages in pediatric HGG, in which shorter murine median survival was accompanied by an increase in the TAM infiltration of PDGFB-driven tumors [87]. The mechanism of TAM recruitment and differentiation in pediatric craniopharyngiomas may be attributed to both the IL-8 and IL-6 cytokines [88]. IL-8 promotes myeloid-derived suppressor cell recruitment to the tumor, while IL-6 activates the JAK/STAT pathway, a key hub of tumor-mediated immune suppression [89].
STAT3 inhibitors are in clinical trials for pediatric patients with brain tumors (NCT04334863). The expression of immune-suppressive macrophages within MB is markedly heterogeneous, but there could be enrichment within the SHH group [90]. Research into the functions of TAMs across subtypes of MB will be needed before the implementation of therapeutics targeting this population, since TAMs may also have anti-tumor properties [91]. Nonetheless, in clinical trials for adult brain tumors, targeting TAM activity, recruitment, and/or polarization may be beneficial, especially when used in combination with immune checkpoint inhibitors [92]. Under normal physiological conditions, the activity of T cells is controlled, in part, through interactions between programmed cell death-1 (PD-1) and programmed death-ligand 1 (PD-L1) [93]. Targeting the PD-1/PD-L1 axis has revolutionized the field of cancer immunotherapy [94], especially for cancers that have a high TMB and immune infiltration [95]. Although the expression of this immune-suppressive axis is common in many solid cancers, its expression in pediatric tumors is less common [96,97] and can be markedly heterogeneous [98]. In supratentorial extra-ventricular ependymomas, PD-L1 expression was a negative prognosticator for progression-free survival [99]. Ependymomas that harbored a RELA fusion (ST-RELA) had high PD-L1 expression on both the tumor cells and myeloid cells. T cell exhaustion was confirmed through PD-1 detection on both CD4+ and CD8+ T cells, as well as their inability to secrete IFNγ upon stimulation [100]. Other profiling initiatives in pediatric CNS tumors [101,102] indicate that the infrequent expression of the PD-1/PD-L1 axis is not the sole mechanism for the generation of T cell exhaustion and that other mechanisms of immune suppression are operational. As such, the use of this type of immunotherapy strategy likely needs to be considered in select subsets of these patients.
Evolution of Pediatric Brain Tumor Classification In recent years, the pathologic classification of both adult and pediatric brain tumors has increasingly used genetic, cytogenetic, and epigenetic (i.e., DNA methylation profiling) features. Histology still provides useful information but has limitations. Two tumors with a similar morphology may have completely different biology and behavior and require completely different therapies. The 2021 WHO CNS tumor classification system is based on integrated molecular diagnostics, combining relevant morphologic and molecular features to identify tumors and determine key prognostic and predictive information. Distinct molecular groups of MB have been recognized since at least 2012 [108], and an integrated molecular diagnostic approach was recommended in the 2016 WHO CNS tumor classification system [109]. The 2021 system takes a similar approach for pediatric-type gliomas. For example, the mutation in histone H3 at G34 is a well-known molecular driver in pediatric-type high-grade hemispheric gliomas [110,111]. This mutation is now a formal diagnostic criterion for the tumor class [112]. The WHO has also defined several new classes of pediatric-type gliomas based on molecular drivers. These include diffuse astrocytoma, MYB- or MYBL1-altered; diffuse low-grade glioma, MAPK pathway-altered; polymorphous low-grade neuroepithelial tumor of the young (BRAF mutations and FGFR2 fusions); and infant-type hemispheric glioma (alterations in NTRK, ROS1, ALK, or MET) [112,113]. Furthermore, DNA methylation profiling [114,115] has been incorporated into the diagnostic criteria for many types of adult and pediatric brain tumors. In some instances, DNA methylation data can make distinctions that are otherwise difficult to ascertain, such as between PF groups A and B in ependymoma [112,116].
Given the evolution of these pathologic classification schemes, the analysis of immune biology as a function of molecular drivers is now emerging. These changes in characterization will likely drive patient selection for future clinical trials, including basket trials. As opposed to enrolling patients with a given cancer-cell-lineage diagnosis, molecular driver classification will likely inform selection and/or stratification. The Paucity of Pre-Clinical Pediatric Brain Tumor Models Currently utilized pediatric brain tumor models include genetically engineered murine models (GEMMs) and patient-derived xenografts (PDX), which mostly consist of HGG and medulloblastomas. Several biobanks exist with established PDX models that are available upon request. Tumor cell lines available through these resources have accompanying histopathology, whole-genome sequencing, RNA sequencing, and DNA array sets, providing researchers with a characterization for the selection of cell lines. While these biobanks have expanded the available pediatric brain tumor models, limitations remain, including the scarcity of other cancer lineages and of clinical and molecular annotation within these repositories. The major limitation of PDX models is the immune-incompetent background, which hampers the ability to investigate the immune composition and its interaction with the tumor and to screen and test immunotherapies. Recent developments of humanized mice that are reconstituted with human bone marrow provide a new avenue to test immune therapies, but these models are associated with a high cost, preventing large-scale drug testing, and it is still unknown how well these mice recapitulate human biology. GEMMs can overcome some of these inherent limitations of PDX models since they have a native immune system and intact microenvironments.
GEMMs enable researchers to study the early stages of tumor initiation, which is particularly relevant for pediatric brain tumors, since the source of embryonal tumors is still unknown and may arise from embryonic or fetal cells remaining in the CNS. GEMMs are created by mutating key pathways, such as EGFR, PDGF, NF1, and TRP53, which are altered in human gliomas [117,118]. These models have inherent limitations, including the maintenance of breeder colonies. Additionally, gliomas have alterations in several pathways, and choosing the correct GEMM can be difficult. Finally, these models may have the correct mutations and genetic alterations; however, several overlapping yet distinct tumor types may share similar alterations. For example, MAPK-driven pediatric tumors encompass several cancer lineages with unique characteristics that will likely require different treatment modalities. Overview of Findings, Future Perspectives, and Implementations There are a number of challenges in the development of immunotherapy for pediatric brain tumors (Figure 3). The recent successes in adult tumor immunotherapy for solid cancers have revealed how the understanding of immunobiology may lead to effective treatment options. In order to achieve this in pediatric patients, it will first be necessary to acquire a better understanding of the pediatric brain tumor immune landscape. This review synthesizes what is currently known about various pediatric brain tumors in an attempt to shed light onto the potential efficacy of certain treatment modalities. For diffuse midline gliomas, studies have shown that these tumors exhibit similarities in the CD3 infiltration frequency between adults and pediatric patients but that the expression of CD8 is greater in adults, suggesting potential differences in the immune response. Pediatric brain tumors, including DMG, have also been observed to have a less immunosuppressive tumor microenvironment compared to adult brain tumors, providing a unique opportunity for targeted immunotherapies. Thus, developing strategies that enhance CD8+ T cell responses and overcome immune-suppression mechanisms involved in DMG, such as TGF-β and IL-10, could significantly improve treatment outcomes. As for LGGs, they have been found to exhibit a higher T cell density compared to high-grade gliomas. The differences in T cell infiltration between LGG subtypes, such as PXA and ganglioglioma, suggest the need for subtype-specific approaches. Targeting specific immune cell populations and enhancing T cell responses tailored to individual LGG subtypes could be explored for more effective treatments. Furthermore, other immune evasion mechanisms in LGG include the upregulation of immune checkpoint molecules, such as PD-L1 and CTLA-4, within the tumor microenvironment. Thus, combining immune checkpoint inhibitors with strategies that enhance T cell activation, such as immune-stimulatory cytokines or adoptive T cell therapy, may improve anti-tumor immune responses in LGG. GBM has been shown to exhibit low tumor immune cell infiltration compared to other tumor types, such as MB.
The higher presence of antigens and natural killer (NK) cell infiltration in MB, along with their positive correlation with prognosis, suggests the potential of NK cell-based therapies. Developing strategies that boost NK cell functions and antigen presentation in GBM could enhance immune responses against these tumors. Additionally, other immune evasion mechanisms in GBM and MB include the upregulation of immunosuppressive molecules, such as IDO. Targeting these molecules with specific inhibitors or combining immune checkpoint inhibitors with NK cell-based therapies may improve the efficacy of immunotherapies in GBM and MB. For ependymomas, the higher frequency of infiltrating immune cells, including activated myeloid cells, in EPN compared to other tumor types indicates the possibility of targeting these immune cell populations. However, further characterization of immune cell populations in EPN is necessary to understand their functional significance and potential immune-evasion mechanisms. In addition to myeloid cells, the presence of TAMs and Tregs in the EPN microenvironment suggests potential immunosuppressive roles. Strategies that modulate these immune cell populations, such as TAM-targeting therapies or Treg depletion, could enhance anti-tumor immune responses in EPN. The unique immune signature observed in craniopharyngiomas, characterized by an increase in epithelial-mesenchymal transition, CTLA-4, and PD-1 expression, suggests the potential for targeting these immune checkpoints. Along with this, the activation of the Wnt/β-catenin pathway, commonly found in craniopharyngiomas, may also contribute substantially to immune evasion, highlighting the importance of this pathway as a target in combination with immune checkpoint inhibitors. Ultimately, despite the abundance of articles on tumor immunology and immunobiology, very little has been studied in the domain of pediatric brain tumors.
Understanding the immunobiology of pediatric brain tumors, including the lower tumor mutational burden and the immunoediting process, is crucial for developing effective therapeutic strategies. Further research into the mechanisms driving immune suppression, immune escape, and immune resistance within these tumor types will be essential for discovering novel drug targets and designing combination therapies that can overcome these challenges. Additionally, exploring the role of epigenetic changes in tumor growth may uncover new avenues for targeted therapies in pediatric brain tumors. More specifically, some reports have quantified various immune lineages but typically have not delved into their distribution across the TME, especially as a function of regions of BBB permeability and tumor genetic heterogeneity. Given the variety of pro-tumor and anti-tumor roles of the various immune cell populations, the general quantification of immune cells is insufficient. Rather, more systematic and extensive profiling is needed before prioritizing the available therapeutic compendium. Efforts directed at single-cell RNA sequencing of the immune compartment along with spatially resolved transcriptomic sequencing would more effectively characterize the heterogeneous distribution of various immune cells across the TME, while also capturing relevant functional signatures. Another major hurdle is the lack of pre-clinical models in pediatric brain tumors, making potential immunotherapies difficult to evaluate. Given this, greater efforts are also needed in developing immune-competent pre-clinical models suitable for translational studies to address the unmet need for adequate therapeutic options in pediatric brain tumor patients.
Key Areas for Future Investigation
• Determine the most frequent immune-modulatory targets that are specific to pediatric gliomas to prioritize the available therapeutic compendium.
• Consider treatment strategies based on molecular alignment as opposed to the histological diagnosis.
• Develop low-grade glioma models in which therapeutic modalities can be rigorously tested.
v3-fos-license
2023-09-01T06:16:13.083Z
2023-08-31T00:00:00.000
261396154
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10067-023-06739-w.pdf", "pdf_hash": "dac47c881c49b351470523ca09536988603b3d3d", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1340", "s2fieldsofstudy": [ "Medicine" ], "sha1": "3e99d55a75256431318e0d8ebe0ffcc9b4cf4b72", "year": 2023 }
pes2o/s2orc
Diagnostic delay in patients with giant cell arteritis: results of a fast-track clinic Giant cell arteritis (GCA) can lead to severe complications if left untreated. The aim of this study was to describe time from onset of symptoms to diagnosis and treatment in GCA suspected patients in a fast-track clinic (FTC), and secondarily to assess the influence of GCA symptoms on this time. A retrospective cohort consisting of suspected GCA patients who visited the FTC between January 2017 and October 2019 was used. Time between symptom onset, first general practitioner visit, FTC referral, first FTC visit, and treatment initiation was analysed. Furthermore, this was stratified for subtypes of GCA and GCA symptoms. Of 205 patients referred with suspected GCA, 61 patients received a final diagnosis of GCA (GCA+) and 144 patients had no GCA (GCA−). Median time after onset of symptoms to first FTC visit was 31.0 days (IQR 13.0–108.8) in all referred patients. Time between onset of symptoms and first GP visit was 10.5 (4.0–36.3) days, and time between first GP visit and FTC referral was 10.0 (1.0–47.5) days. Patients were generally seen at the FTC within 1 day after referral. For patients with isolated cranial GCA (n = 41), median delay from onset of symptoms to treatment initiation was 21.0 days (11.0–73.5), while this was 57.0 days (33.0–105.0) in patients with extracranial large-vessel involvement (n = 20) (p = 0.02). Our results indicate considerable delay between symptom onset and FTC referral in patients suspected of GCA. Suspected patients were examined and GCA+ patients were treated instantly after referral. Key Points • GCA can cause severe complications with delayed treatment, but non-specific symptoms make diagnosis challenging. • Diagnostic delay still occurs despite introducing a successful fast-track clinic resulting from delay between start of symptoms and FTC referral. 
• Patients who presented with constitutional symptoms had longer delay than patients who presented with isolated cranial symptoms. Introduction Giant cell arteritis (GCA) is the most common systemic vasculitis and can lead to severe complications when left untreated. GCA mostly occurs in patients over 50 years, peaking between ages 70 and 80 [1]. The incidence is relatively low (10 per 100,000 person-years) [2]. GCA is classically known for inflammation of cranial arteries (cranial GCA (C-GCA)), including the temporal arteries; however, its disease spectrum also includes an extracranial phenotype referred to as large-vessel GCA (LV-GCA) [3,4]. Inflammation-induced ischemia may cause new headache, visual symptoms, jaw claudication, and scalp tenderness; however, more general and atypical symptoms are also common [4]. GCA is a medical emergency as it can cause severe complications such as stroke, permanent vision loss, and aneurysms. Complications can arise within days and can be prevented with timely glucocorticoid (GC) treatment [5,6]. This emphasizes the importance of preventing delay through early recognition, diagnosis, and treatment of GCA [7], which is challenging due to frequently occurring nonspecific features [8,9]. Traditionally, a temporal artery biopsy (TAB) is performed to diagnose GCA; however, this method has a low sensitivity, is invasive, and results are not instantly available. Even though treatment is often initiated before TAB results are available, unnecessary treatment should be avoided due to serious GC side effects. In recent years, rheumatology GCA fast-track clinics (FTCs) have been introduced globally, contributing to a rapid diagnostic work-up after FTC referral including ultrasound (US) assessment of suspected GCA patients [10]. Despite FTC introduction, timely diagnosis remains challenging and diagnostic delay is not uncommon [11]. In a meta-analysis by Prior et al.
[12], diagnostic delay was described; however, studies included in this meta-analysis did not primarily focus on diagnostic delay, only patients with GCA were described, and symptom duration before diagnosis was not studied. Therefore, more detailed information on reasons for delay in GCA suspected patients seen in an FTC is needed. The present study primarily investigates the time between onset of symptoms and first FTC visit in GCA suspected patients in a Dutch GCA FTC, and time to treatment initiation for those diagnosed with GCA. The study secondarily investigates the influence of GCA symptoms on time to first FTC visit in GCA suspected patients in an attempt to identify patients at risk for delay. Design and subjects This retrospective cohort study was conducted at the GCA FTC of the rheumatology outpatient department in Ziekenhuisgroep Twente (Hospital Group Twente), the Netherlands. Patients older than 50 years suspected of GCA who first presented to the FTC from January 1, 2017 to October 1, 2019 were included. The study was conducted in accordance with the Helsinki code. The study protocol was approved by the METC Twente and was considered as not subject to the Medical Research Involving Human Subjects Act. Informed consent was waived by the METC Twente because of its retrospective nature. Clinical diagnosis at baseline by the treating rheumatologist with verification after 6 months was used to distinguish between patients with GCA (GCA+) and patients without GCA (GCA−).
Data collection The following data were collected: patient demographics, symptoms and clinical examination, laboratory parameters, results of diagnostic imaging and TAB if available, and dates regarding onset of symptoms, first visits to a general practitioner (GP) and/or specialists, referral to the FTC by a GP or another specialist, first FTC visit, and treatment initiation. Dates were collected using patient records and GP referral letters stored in electronic patient records. Generally, GCA-indicated treatment is not started by GPs in the Dutch healthcare system. Clinical GCA diagnosis was based on a combination of symptoms, clinical examination, inflammatory markers, and results of additional diagnostic imaging and/or TAB. GCA phenotype was established as C-GCA, LV-GCA, or overlapping C/LV-GCA. Patient data were collected from electronic health records. Castor study management system (Ciwit B.V., The Netherlands, version 2020.2.24) was used for data management. Definitions and time periods Time periods between onset of symptoms, first GP visit, FTC referral, and first FTC visit were assessed. Time from onset of symptoms to treatment initiation could only be determined in GCA+ patients as GCA− patients did not receive GCA-indicated treatment. This time period was defined as total delay. Cranial symptoms were defined as headache, jaw claudication, and/or scalp tenderness with or without visual symptoms [13]. Non-specific constitutional symptoms were defined as fever, weight loss, and/or fatigue [13].
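The time periods above are simple date differences between recorded events. As a minimal sketch of how such delays can be derived (the event names are illustrative and do not reflect the study's actual database schema):

```python
from datetime import date

# Hypothetical event dates for one patient; field names are illustrative only.
events = {
    "symptom_onset": date(2018, 3, 1),
    "first_gp_visit": date(2018, 3, 12),
    "ftc_referral": date(2018, 3, 22),
    "first_ftc_visit": date(2018, 3, 23),
    "treatment_start": date(2018, 3, 23),
}

def delay_days(events, start, end):
    """Whole days elapsed between two recorded events."""
    return (events[end] - events[start]).days

# "Total delay" as defined above: from symptom onset to treatment initiation.
total_delay = delay_days(events, "symptom_onset", "treatment_start")
```

For this hypothetical patient, patient delay (onset to first GP visit) is 11 days and total delay is 22 days; in the study, such per-patient intervals are the quantities summarized as medians with IQRs.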
Statistical analysis Mean values with standard deviation (SD) were used for normally distributed continuous variables, and median values with interquartile ranges (IQR) for non-normally distributed variables. To compare independent groups, a Chi-square test, independent t-test, or Mann-Whitney U test was used when appropriate. A Kruskal-Wallis test and post hoc Mann-Whitney U tests with Holm-Bonferroni correction for multiple testing were performed to compare non-normally distributed variables amongst more than two independent groups. For each time period, a complete case analysis was performed. A p-value of <0.05 was considered statistically significant. Statistical analyses were carried out in SPSS Inc. (Chicago, IL), version 24. Patient and public involvement statement Patients were not involved in the design of this retrospective study. Baseline characteristics In total, 205 patients with suspected GCA who visited the FTC between January 2017 and October 2019 were eligible for inclusion. In the total study population (n = 205), the mean age was 71.3 years (SD 10.7), and 55.1% (n = 113) were female. Baseline characteristics are summarized in Table 1. In the total study population, 29.8% (n = 61) of patients were diagnosed with GCA (GCA+) and 70.2% (n = 144) of patients were not diagnosed with GCA (GCA−) by the treating rheumatologist. In GCA+ patients, 67.2% (n = 41) had C-GCA, 13.1% (n = 8) had LV-GCA, and 19.7% (n = 12) had overlapping C/LV-GCA based on diagnosis by the treating physician. Diagnosis was confirmed and consistent with baseline diagnosis after 6 months in 58/61 GCA+ patients. Three GCA+ patients were deceased at 6 months (not GCA related). For these patients, a last observation carried forward approach was used.
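The Holm-Bonferroni step-down correction used for the post hoc pairwise comparisons can be sketched in a few lines. This is a generic illustration of the adjustment itself (the study used SPSS, not this code): the smallest raw p-value is multiplied by the number of tests m, the next by m − 1, and so on, with monotonicity enforced so adjusted values never decrease along the ranking.

```python
def holm_bonferroni(pvals):
    """Holm step-down adjusted p-values, controlling the family-wise error rate."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices, smallest p first
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        # rank-th smallest p-value is scaled by (m - rank); keep the running
        # maximum so adjusted p-values are monotone in the ranking, cap at 1.
        running_max = max(running_max, (m - rank) * pvals[i])
        adjusted[i] = min(1.0, running_max)
    return adjusted

# Example: four hypothetical post hoc pairwise comparisons.
adjusted = holm_bonferroni([0.01, 0.04, 0.03, 0.005])
```

For raw p-values [0.01, 0.04, 0.03, 0.005], the adjusted values come out to approximately [0.03, 0.06, 0.06, 0.02], so only the first and last comparisons remain significant at the 0.05 level after correction.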
Time to FTC visit in patients suspected of GCA Table 2A describes the different components between onset of symptoms and first FTC visit in the total study population (n = 205). Median time between onset of symptoms and first consultation with a GP was 10.5 days (IQR 4.0-36.3). Median time between first GP visit and FTC referral was 10.0 days (IQR 1.0-47.5). In 98 patients, the GP referred directly to the FTC, while 90 patients were first referred to a specialist other than a rheumatologist (Fig. 1). When directly referred to the FTC by a GP, median time between first GP visit and FTC referral was 9.5 days (IQR 0.8-51.3), and when referred to the FTC via another specialist this was 12.5 days (IQR 2.0-32.8) (p = 0.921). Patients were generally seen at the FTC within 1 day after referral (1.0 days [IQR 0.0-3.0]). In total, median time from onset of symptoms to first FTC visit was 31.0 days (IQR 13.0-108.8). No statistically significant differences were observed between GCA+ and GCA− patients in any of the time periods described above. Total delay in patients diagnosed with GCA In addition, Table 2A describes median total delay from onset of symptoms to treatment initiation in GCA+ patients (n = 61), which was 32.0 days (IQR 14.0-78.5). GCA+ patients were seen at the FTC directly after referral (median 0.0 days [IQR 0.0-1.0]). At this first FTC visit, treatment was generally started instantly (median 0.0 days [IQR 0.0-1.0]). GCA+ patients were treated with high-dose GCs as indicated by the treating rheumatologist. For patients with isolated C-GCA (n = 41), median total delay was 21.0 days (IQR 11.0-73.5), while this was 57.0 days (IQR 33.0-105.0) in patients with LV-GCA or overlapping C/LV-GCA (n = 20) (p = 0.02).
Time to first FTC visit stratified by type of symptoms To identify patients at risk for longer delay, time from onset of symptoms to GP visit, referral, FTC visit, and treatment was stratified by (type of) GCA and by type of symptoms (Table 2A/B; table footnotes define visual symptoms as those related to GCA, i.e., anterior ischemic optic neuropathy (AION), central retinal artery occlusion, or diplopia; cranial symptoms as headache, jaw claudication, and/or scalp tenderness; and constitutional symptoms as fever, weight loss, and/or fatigue; remaining patients presented with signs and symptoms that would yield irrelevantly small patient groups, for example elevated CRP/ESR or visual symptoms without other cranial or constitutional symptoms). Time to first FTC visit was longest for patients presenting with isolated constitutional symptoms (median 105.0 days [IQR 28.0-184.0]) and shortest for patients presenting with a combination of cranial and visual symptoms (median 14.5 days [IQR 4.3-29.5]) (p < 0.001).
Discussion This is the first study in the Netherlands that primarily investigated delay from onset of symptoms to first FTC visit in GCA suspected patients, including patients that were diagnosed with GCA as well as those not diagnosed with GCA. The median delay to first FTC visit was 31 days for the entire study population. No major differences in delay were observed between patients diagnosed with GCA or patients without GCA. We consider introduction of the FTC successful because all patients with suspected GCA were examined within a day after referral and GCA+ patients were treated with high-dose GCs immediately after diagnostic work-up. Also, our results showed that patients who presented with isolated constitutional symptoms had a longer delay. Hospital Group Twente is the first general hospital in the Netherlands that introduced a GCA FTC with US in the standard diagnostic work-up. Compared to existing literature, delay towards diagnosis in our FTC is shorter. The meta-analysis by Prior et al. [12] described an average diagnostic delay of 9 weeks in GCA patients compared to a month in the present study, illustrating the success of our FTC. Furthermore, this meta-analysis reports a delay of 7.7 weeks for C-GCA patients and 17.6 weeks for LV-GCA patients [12]. This longer delay in LV-GCA was confirmed by our study. The study of Prior et al.
also had some important limitations. First, delay was described as a secondary outcome measure, and little information was available on how the data concerning delay were obtained. In addition, the time frame covered by their study was 1950 to 2013, during which disease awareness, knowledge, and diagnostic methods differed from today. In our study, we primarily investigated delay and examined the different time periods within the referral and diagnostic process in patients with suspected GCA who visited our FTC in more recent years. It has been reported that the introduction of FTCs may lead to a significant reduction in vision loss, mainly through a shorter time to diagnosis and thereby earlier initiation of treatment [10, 14]. In line with this, we previously described a case with a significant diagnostic delay leading to iCVA, highlighting the importance of decreasing delay in GCA diagnosis [11]. Owing to the small number of severe ischemic complications in the present study, an analysis relating symptom duration to complications was not meaningful.
In this study, patients were generally seen and treated with high-dose GCs within 1 day after referral to the FTC. The remaining delay was nevertheless considerable and can be attributed either to patients often being unaware of the severity of their symptoms and therefore not seeking medical attention [15], or to GPs failing to recognize GCA at first presentation because of its generic symptoms and low incidence [16]. Therefore, to reduce total delay, increased awareness of signs and symptoms, particularly alarming symptoms such as sudden vision loss, is needed. In our cohort, around 30-40% of referrals with suspected GCA were diagnosed with GCA in our FTC. Education of the elderly population and of GPs, and classification of patients using pre-test probability tools, may help to increase this percentage of positive diagnoses after referral in the future [17, 18].

Although a retrospective design can be a limitation, it had major advantages for this study. First, it allowed us to include a substantial number of GCA patients despite the low incidence of the disease. Furthermore, the electronic health records in our hospital contain elaborate information on patients with suspected GCA, including the GP referral, which allowed us to study different aspects of delay within the referral process of a GCA FTC. Inevitably, there were missing data in each time period studied. As the data were collected for healthcare purposes, the missing data are probably missing at random, and minimal bias is expected. We did not exclude patients with missing data, in order to make optimal use of the available data; consequently, the number of patients can differ per time period.
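The per-period handling described above (keeping patients with partial data and summarizing each time period on whatever values are available) can be sketched in a pairwise-available fashion; the record layout and field names below are hypothetical, not taken from the study:

```python
def median_iqr(values):
    """Median and interquartile bounds of the non-missing values only."""
    xs = sorted(v for v in values if v is not None)
    if not xs:
        return None

    def quantile(q):
        # Linear interpolation between the closest ranks.
        pos = q * (len(xs) - 1)
        lo, hi = int(pos), min(int(pos) + 1, len(xs) - 1)
        return xs[lo] + (pos - lo) * (xs[hi] - xs[lo])

    return quantile(0.5), quantile(0.25), quantile(0.75)

# Hypothetical delays in days; None marks missing data for that period.
patients = [
    {"onset_to_gp": 10, "gp_to_referral": 4},
    {"onset_to_gp": 30, "gp_to_referral": None},  # kept, used where available
    {"onset_to_gp": None, "gp_to_referral": 2},
]

for period in ("onset_to_gp", "gp_to_referral"):
    print(period, median_iqr([p[period] for p in patients]))
```

Each time period is summarized over a possibly different subset of patients, which mirrors why the denominators reported per period in the study can differ from the overall cohort size.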
To conclude, this study shows that delay between onset of symptoms and FTC referral still occurs in patients with suspected GCA, despite a successfully implemented FTC. Patients with constitutional symptoms and extracranial manifestations had an increased delay compared with those with cranial and/or visual symptoms, who themselves already had a delay of more than 2 weeks. Timely diagnosis and treatment remain crucial to prevent severe complications [7]. To reduce delay, early recognition of GCA-related symptoms is needed. Interventions at the patient level and education of referring physicians, including the use of pre-test probability scores, could help raise awareness of the urgency of FTC assessment for patients with suspected GCA [17-19].

Author contribution M. v. N. primarily carried out the data collection and analyses. M. v. N. mainly wrote the manuscript with substantial contributions and critical revisions from C. A., E. C., M. V., D. B., and E. B. M. V. supervised the statistical analyses. C. A. and E. B. supervised the overall project. All authors discussed the results and contributed to and approved the final manuscript.

[Fig. 1: Types of delay and referral routes of patients with suspected giant cell arteritis in a Dutch peripheral hospital (n = 205). Median time periods are given in days (IQR). * For 17 patients, the referring physician was unknown.]

[Table 1, notes: d cerebral vascular accident or transient ischemic attack; e color duplex ultrasound; f temporal artery biopsy; g 18-FDG positron emission tomography/computed tomography.]

[Table 2: Time in steps between onset of symptoms and treatment initiation in patients with suspected GCA.]
Information literacy for inquiry-based learning

Given the importance of a focus on a globalized curriculum, this study presents a review of the literature on issues related to the nature of learning contents and curriculum, especially the development of curriculum based on the research process (inquiry-based learning) in terms of information literacy. Some hypotheses are formulated to explain the lack of studies on this topic, such as the level of development of information literacy programs, the pedagogical training of librarians, and educational institutions' perceptions of the importance of information literacy. Recommendations for further research on the topic are made. It is concluded that inquiry-based learning allows better integration of information literacy content, providing more meaningful learning by encouraging reflection, student protagonism, and learning how to learn, among other benefits.

Introduction

Information literacy (IL) can be understood as the teaching-learning process needed to develop the competences to seek and use information effectively and efficiently (Gasque, 2012). This requires the integration and organization of IL within the educational curriculum, which can be structured in different ways.

In the study "Curriculum and imagination", McKernan (2009) identified the following models of curriculum design: a subject-centered curriculum built around disciplines, a curriculum of interdisciplinary topics, a student- or child-centered curriculum, a core curriculum, an integrated curriculum, a process-centered curriculum, and a humanistic curriculum.
The most traditional form of curriculum development is based on the idea that knowledge is divided into fragmented subjects organized by discipline, for example mathematics, science, and physics. The approach based on fields of knowledge groups related disciplines within the same field; for example, social studies is a field of study that includes history, geography, economics, and sociology. In these two types of curriculum, IL content can be organized in two different ways: the first is the creation of a new discipline within the curriculum; the second is the distribution of IL contents across the existing disciplines or subjects. In addition, IL contents can be taught in extracurricular activities that complement the existing curriculum. Another structure is the learner-centered curriculum, in which IL contents should be organized according to the learner's interests and needs. A curriculum can also have a core structure, in which the core is the set of common knowledge and courses considered essential for learning. This type of curriculum is adopted in many countries and includes the national knowledge core for basic education, which, in this case, should also include IL contents.
Another type is the integrated curriculum organized around a theme, which allows teachers of different disciplines to teach the same subject with unifying concepts, according to the literature or textbooks of each branch of study. In this curriculum model, themes should be taught combined with one or more IL contents, for example "public health information search procedures". The humanistic curriculum, in turn, is based on values, customs, and the existential question of how to live one's life; this type of curriculum should emphasize the more attitudinal aspects of IL. Finally, the process-centered curriculum is built on the procedures by which students carry out investigations, focusing more on learning how to learn than on disciplinary content. A curriculum model can also integrate features of more than one of the models identified by McKernan (2009).

IL contents are organized according to the educational institution's context, its conceptions of teaching and learning, and its human, structural, and financial resources, among other factors. It is worth noting that, in recent decades, learning or curriculum contents have generally been distributed primarily along disciplinary lines. According to Zabala (2002), this is related to a scientific context in which the processes of scientific development led to the fragmentation of knowledge. However, the author points out that the purpose of education differs from that of science, and it is therefore important to consider globalized approaches, such as project-based work. In addition, according to McKernan (2009, p.100), "the literature on curriculum has established the objectives model as the paradigm for curriculum planning, which has been the most widely used model". This is related to the significance given to technical rationality in education, which has been regarded as a science since the first decades of the twentieth century.
According to the Australian and New Zealand Institute for Information Literacy (ANZIIL), in a publication released in 2004, the most effective way of learning is the transversal integration of IL contents into the curriculum, allowing students to interact with information and reflect on practice. Harada and Yoshina (2004), Hmelo-Silver et al. (2007), Chu et al. (2011), and McKinney (2013), among others, argue that there is evidence for the effectiveness of project-based pedagogical approaches in improving learning outcomes.

Based on the educational literature on the importance of globalized approaches, especially inquiry-based learning as a curriculum development model (Hernandez & Montserrat, 1998; Zabala, 2002; McKernan, 2009), the present study reviews the literature on topics related to the nature of learning content and curriculum, particularly curriculum development based on the research process (inquiry-based learning), identifying studies that address the relationship between research-based projects and IL. The main objectives are to stimulate discussion on the possibilities of integrating IL into basic-education curricula and to contribute to the literature in the field, since the international literature on this topic is scant and practically nonexistent in Brazil.

Information literacy

Learning contents and curriculum

Learning content can be understood roughly as the subjects, themes, or topics to be taught in the learning process; learning contents are therefore directly related to educational objectives. Zabala (1998) argues that contents represent educational intentions, i.e., they are related to "what to teach" and "what to learn" in order to achieve certain goals. Learning contents are also related to the question of "why teach". Contents can be techniques, skills, attitudes, concepts, and so on.
Coll (1986) classifies learning contents into conceptual, procedural, and attitudinal contents. Along these lines, according to Gasque (2012), IL contents should encompass the concepts, procedures, and attitudes that allow one to seek and use information effectively and efficiently. In general, IL standards include skills in dealing with information, such as search skills, the proper use of information, and familiarity with information technology; generic skills, such as problem solving, collaboration and teamwork, communication, and critical thinking; and, lastly, values and beliefs, i.e., that information should be used wisely and ethically, promoting social responsibility and community participation (Australian and New Zealand Institute for Information Literacy, 2004).

The selection of teaching-learning contents indicates the purpose and importance attached to education by people and by different countries; it is thus a dynamic and, at the same time, ideological activity. The reasons are that it is impossible to teach everything and that there are constantly new contents to be taught. In many cases, the inclusion of new contents may replace older ones or result in a surplus of contents and materials (Zabala, 2002). Furthermore, contents should be considered in terms of learnability, i.e., according to school level and the time available for learning.

Learning difficult topics and subjects takes far more time: it is estimated that chess grandmasters need 50 to 100 thousand hours of practice to reach their level of competence. Attempts to cover too many topics too quickly may hinder learning and can result in (a) the learning of isolated sets of unconnected facts and (b) a lack of knowledge of the principles that organize the content (Bransford et al., 2007).
As for IL learning contents, it is worth remembering that although the term "information literacy" first appeared in print in a 1974 report by Paul Zurkowski, publications providing guidance on the use of school and public libraries had existed in the United States since the 1920s. In 1988, the American Association of School Librarians and the Association for Educational Communications and Technology published "Information Power: building partnerships for learning", expanding the focus to encompass lifelong learning and social responsibility.

Generally, IL learning contents are based on competency standards, which include three basic components: access, evaluation, and use of information. These core goals are found in most of the standards created by library associations and information centers, such as those of the American Association of School Librarians (AASL), the Association of College & Research Libraries (ACRL), the Society of College, National and University Libraries (Sconul), and ANZIIL, among others (Lau, 2006).

Curriculum planning based on competency standards results from recent education reforms, in particular from the 1980s onwards, whose standards establish benchmarks for what any student should know and be able to do. Competency-based education requires clear and measurable norms. Accordingly, curriculum, assessment, and professional development must be aligned with the competency standards (Hamilton et al., 2008). Therefore, the standards must present goals; support qualitative and quantitative criteria and norms; and include statements expressed in relative terms, relating performance to norms derived from a reference population (Association of College & Research Libraries, 2012).
Several studies have investigated problems in structuring contents based on competencies. A first line of research concerns the quality of the standards, which have not always been well structured and clear. A second investigates how this type of education affects educational practice. Another issue is how to prevent excessive assessment. Finally, some studies have investigated the learning progressions of students taught under this approach. Recent research has shown progress in learning, but for reasons that are not yet clear (Hamilton et al., 2008).

Many information literacy models undergo revisions to incorporate new perspectives. The model proposed by Sconul, for example, was revised in 2011. It now comprises seven pillars based on two perspectives, theoretical and practical: the learner must understand the issues of each pillar and be able to apply the knowledge. It is a three-dimensional circular model, indicating that the person develops continually and holistically within the seven pillars. In this model, information literacy is an umbrella term that encompasses concepts such as digital, visual, and media literacies, information handling, information skills, and data management.
The Association of College & Research Libraries (2000) IL model for higher education includes five competency standards: (1) determine the extent of information needed; (2) access the needed information effectively and efficiently; (3) evaluate information and its sources critically; (4) incorporate selected information into one's knowledge base; and (5) use information effectively to accomplish a specific purpose, considering ethical, legal, and economic aspects. The framework published in 2000 has recently been revised. The ACRL has recognized that the original standards do not provide enough guidance on visual and digital literacies, often considered subsets of information literacy itself. Furthermore, after the revision of IL objectives for basic (primary) education, the original ACRL model does not provide a continuum of learning for students moving from primary education to higher education.

Accordingly, the new IL standards should be simplified; allow greater flexibility; eliminate library jargon; and include attitudinal learning outcomes in addition to the exclusively cognitive focus of the current standards. They should also encompass other types of literacy, address the role of the student as content creator and curator, and provide continuity with the standards proposed for basic education (Association of College & Research Libraries, 2012).
According to the recommendations of the Australian and New Zealand Institute for Information Literacy (2004) and Lau (2006), as previously mentioned, IL contents should be integrated transversally into the curriculum. It is therefore necessary to understand how transversality occurs and how transversal topics are addressed. Educational reforms in many countries acknowledge the need to include the development of human values, such as citizenship, ethics, and ecology. These topics should not replace those already included in the classical academic disciplines, nor should they be an "addendum" to the official curriculum; rather, they should be "dimensions" that form the basis for the development of the curriculum (Yus Ramos, 1998).

Yus Ramos (1998) highlights that although transversality refers to curricular complexity and globalization, the practice has been interpreted in different ways by curriculum developers: as a set of moral norms combined with specific disciplines (for example, philosophy/ethics); as new disciplines linked to the classic disciplines but scheduled separately; as separate didactic units incorporated into the academic content; as topics associated with special supplementary occasions (environment day, non-violence day); as optional subjects that may or may not be included in the disciplines; as topics distributed evenly across the disciplines, promoting integration between subjects and disciplines; or as a topic diluted in the curriculum, with isolated topics unrelated to each other. The author points out that these interpretations are mistaken and often lead to the trivialization of transversal topics, or have a purely cosmetic effect. Sancho (1998) emphasizes that teaching through transversal topics without questioning the basic organization of teaching and its methods, and without any changes in school management to include them in the regular schedule, has undermined the meaning of
transversality as the central axis of pedagogical experience. It is therefore necessary to determine the best way to integrate IL contents into the curriculum in order to achieve effective and meaningful learning.

The word curriculum derives from the Latin currere, literally meaning "to run the course". Curriculum has been studied by various authors, such as Dewey (1902; 1910; 2010), Stenhouse (1975), Hernandez and Montserrat (1998), Zabala (2002), and McKernan (2009). Following Saviani (2003), it can be broadly understood as the selection, sequencing, and organization of the contents to be developed in teaching-learning situations. It includes knowledge, ideas, habits, values, techniques, resources, devices, procedures, and symbols embodied in school subjects/disciplines, indicating activities and experiences for the consolidation and evaluation of learning.

There is a need for intensified research on IL and curricular issues in the field of information science: grasping the concept of curriculum, the selection and sequencing of contents, the distribution of contents by grade level or age, the limitations of traditional approaches, and the feasibility of alternative proposals are all crucial for integrating IL contents into school and academic curricula.

In Brazil, few articles are available on this topic, and those that exist address how library science courses integrate IL into their own curricula (Lins, 2007; Sousa & Nascimento, 2010). This may reflect the stage of application of this process in the educational context and the difficulties faced by researchers and librarians concerning the psycho-pedagogical aspects of teaching and learning.
According to the literature, even in countries with more research on IL, many programs remain at an experimental stage and few have been integrated in a cross-curricular manner. Harris (2013), for example, investigated the integration of IL contents into the Quality Enhancement Plans (QEPs) of the Southern Association of Colleges and Schools (SACS) between 2004 and 2011. The author organized 106 QEP proposals into three categories based on the focus given to IL goals, outcomes, and assessment: IL-focused, IL-integrated, and IL-optional proposals. In the first category (18 proposals), IL development was the stated goal of the proposal. In the second (58 proposals), IL was one of the primary goals and/or outcomes identified in the proposal. In the last category (30 proposals), IL was not listed as a stated goal of the plan, although IL outcomes or instruction were included as optional or incidental components of the QEP. Based on the results of the IL proposals, it was observed that the majority (37) included one or more learning outcomes related to the evaluation of information sources. The author concluded that the teaching plans focused on critical thinking, on the use of information to accomplish a goal, and on the effective and efficient location and selection of sources.

The limited flexibility of traditional models is a factor that hinders the integration of IL contents into the curriculum. McKernan (2009) argues that the technical-rational curriculum model, dominant since the twentieth century, emphasizes ends-means rational planning through instructional-behavioral objectives.
Accordingly, several authors have questioned the traditional models, showing the need to organize the curriculum on the basis of alternative approaches. Some studies show that traditional curricula do not help students "learn their way around" a discipline: they provide only routine training, without teaching students to understand the whole picture, and thus do not ensure the development of integrated knowledge structures or of the conditions for applying them. In other words, traditional curricula specify objectives that are not always considered part of a larger network; yet it is the network, the connections among objectives, that matters (Bransford et al., 2007).

Zabala (2002) is one of the authors who has questioned the traditional curriculum, identifying the need for a globalizing approach. The author's justification is based on research on human perception, which associates knowledge with understanding situations from a global point of view, given the need to respond to problems or particular situations. Furthermore, for learning to be meaningful, it must be motivating, that is, based on learners' interests.
The problem of curriculum design is to lead students to experience and understand the things that truly matter in life (McKernan, 2009). Such an approach, proposed by Dewey (1902; 1910; 2010), sets out two principles for the progressive organization of curriculum materials. The first is that teaching content should be based on common life experiences. The second concerns the relationship between the progression of the content taught and students' maturity level. Among various educational recommendations, the author argues that "learning should be based on the internal conditions of an experience that leads to an active search for information and new ideas" (Dewey, 2010, p.82).

Zabala (2002) presents several globalized curriculum methods; in the author's opinion, the most relevant, given their historical and present importance, are Decroly's centers of interest, Kilpatrick's inquiry-based learning, the Movimento di Cooperazione Educativa (MCE), and globalized project-based work. These methods differ in the emphasis placed on students' effort and production, as well as in their didactic sequences. Among them, inquiry-based learning and globalized project-based work stand out, as they are the most commonly researched and used methods for curriculum organization and the development of IL processes.

Information literacy projects

According to Dewey (1910; 2010), projects are a method for developing reflective thinking. Projects must be related to students' experiences. The author argued that sound educational experience involves continuity and interaction between the learner and what is learned, and he therefore criticized the traditional curriculum for its rigid organization and for disregarding the capacities and interests of learners.
Dewey's (2010) proposal focused on teaching through real-life activities, because isolated teaching, disconnected from students' experience, does not prepare them for the real world. Accordingly, schools should work as a small community, with students involved in real-life activities and given opportunities for contribution and responsibility. Learning would thus occur through problem solving, which involves the scientific method. The scientific method facilitates the formulation of concepts and explanatory theories, since it includes observation and data collection to establish accurate facts, and relies on processes that tend to avoid hasty conclusions and rest on a proper understanding of the meaning of things.

In "Experience and education" (2010), Dewey uses the term "purpose" with a meaning similar to that of projects. A purpose is what one intends to achieve; it thus involves foreseeing the consequences of an action, which is an intellectual operation involving observation, understanding the significance of what is observed, judgment, and planning.

Teaching based on learners' interests and experiences was the core value of the progressive schools. Dewey (2010) stated, however, that some of them misunderstood the concept, since their "projects" were not always educational projects. To be educational, a project must fulfill four conditions. The first is that it be based on students' interests. The second refers to the intrinsic value of the project: it should be pleasurable and at the same time "represent something that is worth for itself in life" (Dewey, 1910, p.217). The third condition is that it introduce problems that arouse curiosity and require the search for information. Lastly, the project should last a reasonable amount of time to allow proper implementation; moreover, it should be an ongoing process and not a series of disconnected facts.
Dewey's pedagogical beliefs were not restricted to the theoretical field. In 1896, Dewey opened the University of Chicago's Laboratory School, an experimental school, to put his pedagogical ideas into practice. According to Westbrook (1993), at the center of the School's curriculum was what Dewey termed the "occupation", that is, a mode of activity on the part of the child which reproduces, or runs parallel to, some form of work carried on in social life. The school, however, was not designed for social reproduction, but rather to develop cooperative, critical citizens for a democratic society. Students learned to do: to cook, to sew, to work with wood and tools. Along with these activities, contents of writing, reading, geography, arithmetic, and other subjects were developed, that is, when students recognized their usefulness for solving the problems posed by their occupational activities. Students shared in the planning of their projects, and the execution of these projects was marked by a cooperative division of labor in which leadership roles were frequently rotated. Providing children with first-hand experience of problematic situations is the key to Dewey's pedagogy.

Kilpatrick is recognized for structuring and disseminating the inquiry-based learning method, but it was John Dewey who first applied it, in the experimental school of the University of Chicago. The projects were characterized by functionality and by the influence of Stanley Hall's evolutionism, Thorndike's learning theory, and Dewey's social theories (Zabala, 2002).

TransInformação, Campinas, 28(3):253-262, set./dez., 2016. http://dx.doi.org/10.1590/2318-08892016000300001
More recently, projects have taken on a new configuration and name: globalized project-based work. Some authors see it as a progression of project work, aiming at the globalization of school content (Zabala, 2002). Hernandez and Montserrat (1998) "rediscovered" projects and reported their experience with curriculum development through projects at the Universitat Pompeu Fabra, Barcelona, Spain. The authors concluded that learning how to learn was the focus of the experience, but that cultural, personal, and idiosyncratic issues, as well as students' and teachers' affections, cannot be hidden from view. They added that, despite the emphasis on teaching-learning processes, there should also be concern with outcomes, based on reflection and not only on measurement. Their study has been cited in several articles and research projects in Brazil, perhaps because it was the first to describe in detail the implementation of projects in a basic-education school. It helps readers envision a scenario with its several important elements: actors, didactic and pedagogical issues, difficulties, and benefits, among others.

In short, John Dewey's pedagogy requires more committed teachers, able to understand how students think and what they know, and able to plan their actions with flexibility and focus. This requires considerable change, not only in the classroom but in the entire school, which is not always easy. It also requires a flexible curriculum design, focused on students' everyday problems. This explains why Dewey emphasized that the path towards the new education is the more resistant and difficult one.
There are few studies on IL and inquiry-based learning in the field of information science. Reed and Straveva (2006) argue that teaching IL without a reflective-thinking approach reduces it to a mere set of abilities. This statement is corroborated by Gasque (2006; 2008; 2012), who showed the importance of reflective thinking in IL teaching through the use of research projects. This author sees projects as research processes aimed at solving problems; within this perspective, they can be understood as a teaching-learning proposal focused on the globalization of content, i.e., the contents are addressed by topic and investigated in the classroom, where students are responsible for their own learning.

Inquiry-based learning, which originated in John Dewey's studies, and the more recent project-based work, understood as a reinterpretation of Dewey's proposal, are equivalent to research projects based on the scientific method of problem solving. Both include, in general, the identification of problems, hypotheses, objectives, justifications, information search, methodology, data collection, analysis, and conclusions. In a research investigation, reflection on the elements and the relationships established in the process should be emphasized so that they are fully understood; moreover, the research topic should not become the sole focus of the investigation at the expense of understanding the processes of searching for and using information (Gasque, 2012). Hepworth and Walton (2009) argue that IL contents should be integrated into the curriculum through problem solving, since this allows thorough reflection, promoting more meaningful learning and didactic transposition to other contexts.

Although there is evidence of learning improvement through inquiry-based learning, according to Chu et al.
(2011), there are very few studies on IL investigating the use of these strategies. For example, when searching the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (Capes, Coordination for the Improvement of Higher Education Personnel) web portal, only a few articles were found using the terms "projects and information literacy", "IL and Dewey", "IL and Kilpatrick", and "IL and research-based learning".

TransInformação, Campinas, 28(3):253-262, set./dez., 2016 http://dx.doi.org/10.1590/2318-08892016000300001

One of the few articles identified was written by Yu et al. (2011), who recognize the potential of the use of projects to integrate IL. Their results show that teachers perceive IL as Information and Communication Technology (ICT) competencies and that they do not teach IL contents during project supervision. Moreover, only the basic contents of IL were integrated into the projects. Harada and Yoshina (2004) also recommended that libraries contribute to the research process by: (1) helping students widen the initial research sources; (2) helping students raise more complex research questions using web search techniques; (3) helping students with research strategies using keywords and discussing the different types of resources used in research; (4) teaching effective note-taking strategies; (5) helping students synthesize contents through conceptual maps, flowcharts, etc.; and (6) helping teachers and students develop indicators to assess student performance throughout the research process and evaluate its outcomes.
Three articles addressing the assessment of IL integration through research projects were identified. The article by Gehring and Eastman (2008), about a biology course at Connecticut College, evaluated the results of implementing IL through projects. The students were given an IL tutorial and carried out tasks of primary literature analysis integrated with laboratory research projects. The results show that the students increased their ability to identify and use adequate sources of information. In addition, the patterns students used to seek and use information were identified. Self-assessment responses indicated that students recognized the impact of IL on their information skills and felt more confident about future biology courses.

Another study, conducted by Chu et al. (2011), investigated the effect of combining a collaborative teaching approach with inquiry-based learning on the development of information literacy and information technology skills in a primary school. The results indicate that the program had a positive impact on the development of the participants' different dimensions of IL and information technology skills.
Finally, the study by McKinney (2013) reported the evaluation of development projects undertaken at a United Kingdom university focusing on IL and on the development of IL capabilities to support learning. The results showed that students developed individual IL capabilities and learned how to use library resources effectively. The students also recognized the importance of IL for their personal and professional lives. Moreover, the results showed the importance of considering the development of IL in the context of projects. Within this perspective, librarians and IL experts should actively participate in project development. The author points out that inquiry-based learning focused on the development of IL needs to contextualize subjects that are meaningful to students. Teachers need to explain to the students that IL development is the focus of specific activities and discuss the concept of IL with them.

Although there are many studies on the importance of project-based pedagogy in the area of education, there are still few initiatives that relate this approach to the development of the IL process. Additionally, the (few) studies identified in the literature review address projects carried out within a single discipline; they do not focus on a curriculum change intended to globalize the contents. Furthermore, the three articles that assessed the implementation of projects integrated with IL content did not evaluate the learning of content. Ideally, project results should show improvement both in the performance of information search and use activities and in the discipline contents or topics covered.
As previously explained, some hypotheses to explain the lack of research in this area are related to the fact that this topic has only recently been investigated and to the often inadequate pedagogical training of librarians. Another hypothesis concerns how educational institutions and teachers perceive the importance of the IL process and how much they are willing to invest in it. This is reflected in the sequencing and organization of IL contents in the curriculum, which are related to the teaching-learning concepts adopted. Therefore, investing in research on this topic is of extreme importance.

Conclusion

The implementation of IL programs is linked to the sequencing and organization of content in the curriculum, which are related to the pedagogical concept adopted. Although there are recommendations in the literature addressing the importance of integrating IL contents in a cross-curricular manner, it was observed that this concept has often been misunderstood. Due to their potential, research/project methods enable better integration of IL contents into the curriculum, providing students with more meaningful learning by encouraging reflection, student protagonism, and learning how to learn, among others. Yet there is little research relating Information Science and Education. Some hypotheses formulated to explain this situation concern the level of development of IL programs, the pedagogical training of librarians, and educational institutions' perceptions of the importance of IL. Finally, based on the aforementioned discussions, it is recommended to stimulate and support research on curriculum and IL, in particular on inquiry-based learning.

Acknowledgements

The authors are grateful to Dr. Isabel Cristina Michelan de Azevedo for her careful reading and suggestions that greatly improved this manuscript.
proposed the following content classification: a) Conceptual content: encompass facts, concepts, and principles (what one should know); b) Procedural content: techniques and methods (what one should know how to do); c) Attitudinal content: include values, attitudes, rules (how one should be).
Provision of Ubiquitous Tourist Information in Public Transport Networks

This paper outlines an information system for tourists using collective public transport, based on mobile devices with limited computation and wireless connection capacities. In this system, the mobile device collaborates with the vehicle infrastructure in order to provide the user with multimedia (visual and audio) information about his/her trip. The information delivered, adapted to the user's preferences, is synchronized with the passage of the vehicle through points of interest along the route, for example bus stops, tourist sights, public service centres, etc. This synchronization raises technical problems that information systems for on-route travellers usually do not have to resolve, because they use mobile telephony infrastructure, such as 3G/UMTS.

The remainder of this paper is structured as follows. The following section is dedicated to relevant related works in the field of on-route passenger transport information systems. The main technological challenges and innovative aspects of the system are presented in the third section. The system itself is described in the fourth section, and the system validation tests in the fifth. The last section presents the main conclusions and future work.

A number of related works can be found in the field of ITS architectures and frameworks for road transport information services, two examples being EasyWay [4] and CVIS [5]. Both ITS initiatives make use of the basic technological infrastructure available on roads (sensors, vehicle-to-infrastructure (V2I) communications and traffic monitoring) in order to provide road transport information services. In the case of EasyWay, these information services are grouped into three areas: Traffic Management, Freight and Logistics Management, and Traveller Information Services.
In the case of CVIS, the service groups are: Cooperative Urban Applications, for improving the efficient use of the urban road network at both local junction and network level and enhancing individual mobility; Cooperative Inter-urban Applications, for enabling cooperation and communication between the vehicle and the infrastructure on inter-urban highways; Cooperative Freight and Fleet, for increasing the safety of dangerous goods transport and optimising transport companies' delivery logistics; and Cooperative Monitoring, for developing specifications and prototypes for the collection, integration and delivery of real-time information on vehicle movements as well as on the state of the road network.

The main goals of the traveller information services of these ITS initiatives are to improve road transport safety and to reduce traffic congestion and CO2 emissions. The information produced by these services is conceived for private transport users (drivers) and consists of real-time warnings related to relevant incidents on the road, road signalling, weather forecasts, and travelling time predictions. This kind of information is implemented as short data messages using mobile telephony technology (3G/UMTS). The proposed system, by contrast, is a travel information system conceived for tourists travelling in public transport; it provides information that is interesting from a touristic point of view, adapted to the user's preferences (for example language and media), and uses the local communication infrastructure available in public transport vehicles, for example Bluetooth and WiFi.
Therefore, the following technological challenges have been faced: first, the amount of data associated with tourist information is potentially high; second, the information can be accessed by several travellers in the same vehicle using different mobile devices; and finally, the use of local mobile communications requires proper techniques for service discovery and connection establishment with a latency acceptable to the user. Conceptually, the proposed system could be considered a traveller information service to be integrated into the infrastructure of the ITS frameworks mentioned above.

Technological Challenges

As explained in the previous section, current traveller information services are characterised by a centralised information structure; users, over mobile telephony infrastructure, must connect to a remote server in order to gain access to the information. Generally, these services are based on Web Service technology and the user mobile devices must have advanced resources such as GPS. Another common property is that the information concerns relevant events occurring while travelling, the information interchange being implemented through short data messages. Important limitations of these services are, first, that the functionality for adapting to user preferences is limited and, second, that they are conceived for private car drivers. The system described in this paper provides tourist information in the context of a journey by public road transport. This information service has a high degree of accessibility for its users (tourist travellers): the information must be accessible through general-purpose mobile terminals that do not necessarily perform well. Therefore, access to the tourist information must be through local wireless communication networks available to these kinds of devices.
The architecture of the proposed system meets the following requirements:

- Device heterogeneity. Tourist information should be available to a variety of general-purpose user mobile terminals.
- Interoperability. The system must be able to operate in the different technological and operational contexts of the public transport operator.
- Scalability. The system allows new elements to be added to the infrastructure, so that newly developed information services can be added or made accessible to a greater number of users.
- Spontaneous interaction. The system allows spontaneous interaction between the vehicle infrastructure and users, whose number is potentially high.

The software system can be characterised by its capacity to integrate with its surrounding physical and technological environment. Consequently, it can operate autonomously and spontaneously in different vehicle environments. To attain this, the system accepts that the number of users, devices and applications that intervene in a public transport environment is unpredictable. A second principle accepted by the system is that distinctions between public transport environments must be made by boundaries that mark differences in content, and these boundaries need not limit the system's interoperability. For this reason, a set of invariable operating principles that govern the execution of the system must be specified. Because of these characteristics, the system architecture is deployed in two areas. The first is the infrastructure of the public transport network, especially the vehicle infrastructure; this includes a basic set of components comprising all the elements that allow user applications to access tourist-related information. The second is the user devices, comprising all the components that can integrate into the different vehicle environments and that facilitate access to the information produced by the tourist information services.
Related Works

Intelligent transport systems (ITS) aim to improve the safety, comfort and efficiency of both public and private transport. Advances in mobile communications have propitiated the development of infrastructure that enables communication between infrastructure and vehicles (I2V) as well as between vehicles (V2V), which has, in turn, led to the development of ITS including new services [6]. Giannopoulus [7] describes how the use of new information and communication technologies, especially those able to operate in any place or context, will change the way transport companies work, particularly regarding the distribution of information to the traveller. According to this author, transport systems should be efficient, reliable and easy to use, as well as highly adaptable to the needs and preferences of users. In recent years, different works have been carried out to identify the requirements that information systems for travellers must fulfil, especially travellers with special needs, such as the blind, the deaf, or the elderly. For example, Mitchell [8] and Waara [9] address the information problems faced by elderly and disabled travellers using public transport, and Jakubauskas [10] describes how smart transport systems can improve urban transport accessibility for passengers with reduced mobility. On the implementation side, the bibliography contains several works describing specific proposals for passenger information systems. In the field of information services for private transport passengers, services aimed at improving passenger safety using technological infrastructure based on intelligent sensors, mobile communications and location systems are particularly significant. This type of infrastructure is deployed in vehicles and on roads, giving rise to what are known as smart roads.
As examples, we could cite Jang's proposal [11] of an environment for the development of on-board information services for the driver aimed at enhancing safety during the journey, and that of Pérez [12], who proposes a sensor system for infrastructure-to-vehicle (I2V) RFID communication that can transmit the information provided by active signals placed on the road to adapt the vehicle's speed and prevent collisions. The literature also includes references to the development of surveillance systems in public transport infrastructure to improve the public's safety in case of natural disasters or terrorist attacks. An example of this kind of system can be seen in Proto's work [13], which describes a surveillance and monitoring system for large transport infrastructures (airports, transit stations, motorways, etc.) based on a network of local sensors and others distributed in the infrastructure of large transport networks. The technology available in vehicles varies depending on the kind of vehicle and the type of services to be provided. However, international standards covering on-board architectures have already been put in place, an example being ISO/DTR 13185-1.3 [14]. In our opinion, systems conceived for people with special needs are especially interesting. In this vein, we find the work of Turunen [15], who describes a system that provides audio information to guide blind people in an intermodal transport context, where the user accesses the services through his mobile device (PDA or mobile telephone). Sanchez [16] describes the AudioTransantiago system, an information system that facilitates access to the Santiago de Chile public transport network for blind people. Barbeau [17] describes a system named Travel Assistant Device (TAD) that gives users with special needs, equipped with a GPS-enabled mobile telephone, information about their trips, specifically for users who have planned their journey in advance.
The system guides them by providing real-time information as to their position, warning them if they deviate from their route and notifying them when they have to ask for the bus to stop. In the specific case of information systems for tourists, we highlight the system proposed by O'Grady [18], which provides information on the route, guiding a traveller from one point in the city to another on foot. Finally, in the field of systems inspired by the ubiquitous computing paradigm, Arikava [19] proposes a multi-modal transport information system. All these systems share a common architectural characteristic: they have a client-server architecture where the information is located in a central server and the clients access the services using Web technology from their mobile devices. The system described in this paper has a distributed client-server architecture: the Server is executed in the vehicle in which the passenger is travelling, and the communication between this Server and the client application executed on the mobile device takes place over the local communications infrastructure available in the vehicle (Bluetooth).

Description of System

The objective of our system is to develop an infrastructure for the distribution of multimedia information to mobile devices. The system must be able to deliver contents (audio, video, text) to travellers as the vehicle passes certain relevant points on the route. It adopts the goals of traditional public road transport information systems for passengers. These kinds of systems provide passengers with static information, such as departure timetables, estimated times between stops and fares, or dynamic information, such as anticipated times of arrival of vehicles, changes in circumstances and delays. These information services can normally be accessed on Internet pages or are provided on panels placed in stations or at stops.
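As a minimal illustration of the static information mentioned above, the following sketch (class name and figures are hypothetical, not taken from the paper) derives anticipated arrival times at each stop from a departure time and the estimated travel times between consecutive stops:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: derive anticipated arrival times (in minutes after
// midnight) at each stop from a departure time and the estimated travel
// times of the legs between consecutive stops.
public class TimetableSketch {
    public static List<Integer> arrivalTimes(int departureMinutes, int[] minutesBetweenStops) {
        List<Integer> arrivals = new ArrayList<>();
        int t = departureMinutes;
        for (int leg : minutesBetweenStops) {
            t += leg;          // accumulate the estimated duration of each leg
            arrivals.add(t);
        }
        return arrivals;
    }
}
```

For a departure at 8:00 (480 minutes after midnight) and legs of 5, 7 and 4 minutes, the anticipated arrivals are 8:05, 8:12 and 8:16.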
The proposed system provides the information directly to the user by means of personal mobile devices, mainly cellular telephones, which are in common use nowadays. The direct interaction between the user's mobile device and the public transport infrastructure, basically the transport companies' production systems, is carried out in such a way as to offer the user information immediately, and it is this type of interaction that constitutes the system's most noteworthy characteristic. In this sense we have followed the philosophy of the ubiquitous computing and ambient intelligence (AmI) paradigms, trying to integrate technological elements into users' daily life in a transparent way that requires the smallest possible adaptation effort on their part. These technological elements are already in their hands, and our job is to maximise the advantages they offer, as Fuentes [20] proposes when he explains the three properties that AmI devices must fulfil, benefiting both the user, with enhanced facility, flexibility and reliability in information access, and the transport companies, who benefit not only from improved information distribution but also from the external resources provided by the users. For this reason, the system constitutes an innovative tourist information system: during a public transport vehicle's journey, the system can offer travellers information concerning points of interest, such as cathedrals, monuments, shops, buildings, etc., through audio, video or text contents. The system developed comprises three major elements: the Client Application, the on-board Information Server and a repository from which the user can obtain the Client Application. The Client Application runs on the user's mobile terminal, the on-board Information Server executes on a computer that forms part of the infrastructure of the vehicle (bus), and the repository is located on a Web site.
From the repository, the user can also download information supplied by the company about the different documented routes. Each documented route is represented by a package of multimedia files that the Client Application uses during normal operation. A user must obtain the Client Application and the package containing the data files for each route of interest. During the trip, the Client Application communicates with the bus's on-board computer to obtain the data required to assist the traveller. This is done over the on-board Bluetooth infrastructure, as this is the technology most widely supported by user mobile devices; it also avoids the communications infrastructure used by the production processes running on the vehicle, such as WiFi. The information service should interfere as little as possible with the production processes running on the transport infrastructure and should consume as few infrastructure resources as possible. From the perspective of system users, four main roles can be distinguished:

- Administrator or content manager: in charge of managing the contents in the repository, using the application installed in the central station; the administrator loads the contents and their associated position tables.
- Activator of the service: in public transport this role may not exist, since the service can be activated automatically using the infrastructure available in public transport vehicles. By contrast, in discretionary passenger transport someone must activate the service in the on-board Information Server at the beginning of the journey. The vehicle driver is best placed to do this, although the action must not interfere with his/her main task of driving the vehicle.
- Clients: the users of the application installed in mobile terminals.
Apart from installing the application on their devices, clients must also perform a basic initial configuration.
- Contents generator: although this user does not form part of any of the system elements implemented, it plays an important role. To make all the information available about a route accessible, that information must be studied carefully; once this process has been carried out, the new contents can be included in the repository so that clients can download them.

Currently, online distribution of multimedia data is not available, because the prototype uses Bluetooth and distributing multimedia information over this technology is too slow. Therefore, bearing in mind that the goal is to synchronize the delivery of information with the passage of the vehicle through a sequence of places of interest along the route, we considered it more feasible for the user to download the files in advance, so that the Information Server only needs to send a message to the client application indicating which file to reproduce and the exact moment at which reproduction should start.

Vehicle Infrastructure

The system assumes that vehicles are equipped with all the elements required to control their activity autonomously. In the context of public transport, this means that the vehicle infrastructure has all the resources needed to perform tasks related to payment control and planning without a permanent connection to a control centre. From a functional point of view, these elements can be grouped as follows (Figure 1):

Figure 1. On-board production subsystem.

- On-board computer: with the computing, storage and communication resources required to execute the processes related to production activities.
In our case, we have an embedded computer with a low-power processor, 64 Mbytes of main memory, a 1-Gbyte solid state disk, serial communications interfaces (RS-232/485), an IEEE 802.11 network interface and a Bluetooth interface.

- Positioning subsystem: all the elements providing the system with information about the vehicle's location. In our case this subsystem consists of a GPS receiver.
- Communication subsystem: for transmitting and receiving information (voice and data). Long-distance communications are supported by public infrastructure (radio, mobile telephony) or by private infrastructure (normally radio systems). In our case, a public trunking radio infrastructure is used; the data travel in short packets associated with relevant events. A wireless local network (IEEE 802.11) is used to transmit large amounts of data between the on-board systems and the company's information system.
- Payment subsystem: the elements required for on-board payment, normally a driver console and a contact-free card terminal.
- Sensors subsystem: elements that enable the on-board system to access critical parameters related to the safety of the vehicle (for example, the open-doors alarm), electrical parameters (for example, the battery voltage level) or the environment (for example, temperature).

The elements of this infrastructure used by the system are the on-board computer, the positioning subsystem and the communications subsystem.

Conceptual Data Model

Interoperability is an important characteristic of efficient public transport: the capacity to integrate applications developed by different suppliers, enabling the exchange of data between different software products. For this reason, operators and authorities are interested in using standard specifications to facilitate this integration. A conceptual data model specification exists in Europe.
This model, named TransModel [21], includes an ontology: a set of entities and relationships covering the basic data needed to describe the network, the handling of different data versions, and the information needed for the different domains of public transport, including tactical planning (vehicle scheduling, driver scheduling, rostering), driver disposition, operations monitoring and control, passenger information, fare collection, and management information and statistics. The system uses this conceptual specification, including data descriptions that go far beyond the planned timetable, which is the main source of traditional timetable information but does not take any dynamic issues into account. Specifically, the data concepts of this specification refer to passenger information facilities, conceptual components of a passenger trip, definitions needed to calculate trip duration, the times at which individual stops are passed on journeys, and service modifications that are consequences of exceptions to the original plan. Basically, the system uses elements of the topological network definition (lines and journeys), geographical information and information regarding specific types of passenger. From the point of view of the formal context representation for ubiquitous contexts, this representation follows the specifications introduced by Hervas [22], which identify the initial requirements to model the context.

Client Application

In order to deal with the heterogeneity of users' mobile devices, the Client Application has been developed in the form of a MIDlet, so each mobile device is required to implement Java ME (Figure 2). This choice was made because, in our opinion, it guarantees that the application can be executed on a large number of mid-range terminals, and the adaptation enabling it to run on other types of terminals, such as Android terminals, is relatively easy.
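The topological elements the system takes from the data model above (a documented route as an ordered sequence of stop points or points of interest), combined with the GPS position supplied by the vehicle's positioning subsystem, suggest a simple trigger mechanism for the on-board Server. The sketch below is illustrative only — the class names, coordinates and radius are assumptions, not taken from the actual implementation — and uses the haversine formula to detect that the vehicle is within a given distance of a point of interest:

```java
import java.util.List;

// Illustrative sketch: a documented route as an ordered sequence of points of
// interest, each tied to the multimedia file the client should reproduce, plus
// the proximity test the on-board Server could apply to the vehicle's GPS fix.
public class RoutePlan {
    public static class PointOfInterest {
        public final String name;
        public final double lat, lon;   // WGS-84 coordinates
        public final String mediaFile;  // file to reproduce on the client
        public PointOfInterest(String name, double lat, double lon, String mediaFile) {
            this.name = name; this.lat = lat; this.lon = lon; this.mediaFile = mediaFile;
        }
    }

    private final List<PointOfInterest> pois;
    public RoutePlan(List<PointOfInterest> pois) { this.pois = pois; }

    // Great-circle distance in metres between two coordinates (haversine formula).
    public static double distanceMetres(double lat1, double lon1, double lat2, double lon2) {
        double r = 6371000.0; // mean Earth radius in metres
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * r * Math.asin(Math.sqrt(a));
    }

    // Returns the media file to trigger when the vehicle is within radiusMetres
    // of a point of interest, or null if no point is near.
    public String triggeredMedia(double vehicleLat, double vehicleLon, double radiusMetres) {
        for (PointOfInterest p : pois) {
            if (distanceMetres(vehicleLat, vehicleLon, p.lat, p.lon) <= radiusMetres) {
                return p.mediaFile;
            }
        }
        return null;
    }
}
```

In the real system the decision of when to signal the client is bound to the route plan and the TransModel-style data; the sketch only illustrates the proximity test itself.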
The heterogeneity of mobile devices on the market also affects the way data are stored and organized in the devices, making it impossible to take a common file system structure for granted; consequently, the static data required by the application are embedded in the Java application itself. The first step to be performed in the Client Application is its configuration. To this end, the user has to select the communication technology to be used (only in terminals with alternative communication technologies, such as Bluetooth and IEEE 802.11). When the user begins a guided trip, he must specify his preferences: the bus route (trip) he wants to take, his destination and how he wants to be informed. At the moment the system has two modes of reporting information:

- Arrival alert. In this mode, the passenger is notified on arrival at the destination specified in advance. The warnings are issued three times: when the vehicle is at the bus stop before the one selected, just before arrival, and when the vehicle stops at the destination bus stop.
- Detailed guidance. In this mode, the user is notified at each point of interest that the vehicle comes to during its journey. The points of interest are selected following a criterion determined by the purpose for which the system is programmed: tourist information, urban information, cultural places, administrative centres, etc.

Figure 2. Client application screen executing on a mobile phone.

As far as the graphic interface of the Client Application is concerned, since this is an application for mobile devices, a simple interface that is easy to use has been chosen. Its simplicity is based on the following factors:

- Limitations in the features of the mobile devices. Although the latest generation of devices has improved considerably, they still suffer important restrictions in terms of screen size and data processing capacity.
- The wide variety of target users makes it important to ensure that the application is user friendly.
- Lastly, the success of any application depends on how easy it is to use.

Figure 3 shows the execution flow of the Client Application and the different forms of interaction with the user. The first screen shows three options:

- "Exit", which terminates the execution, showing the disconnection form.
- "Configure", which gives access to the configuration screen. From the configuration screen, pressing the "Save" or "Cancel" buttons returns to the first screen. It is also possible to access the forms for selecting the working route by pressing the "Select Route" button. In this form there are three options: "Open", which opens the selected file to continue with the operation; "Select", which allows the configuration folder to be selected; and "Exit", which returns to the configuration screen.
- "Next", which moves forward in one of two directions: to the configuration screen, if configuration has not yet been carried out, or to the initial form to initiate registration with the server, if it has.

Once registration has been accepted, a screen comes up indicating that this process is underway; when it is completed, the registration confirmation screen appears, informing the user that the device is ready to start reproducing contents once a place of interest has been reached. From this screen the application can also be terminated by selecting "Exit", in which case the service is asked to terminate the client's registration. The reproduction screen presents various options:

- Volume configuration: up, down or silence.
- Reproduction options: pause or continue.
- Exit, which terminates the execution of the client's application.
Once the Client Application is in "reproduction" mode, depending on the information provided by the server, several information screens can appear automatically: a screen with information sent by the server to the client, a screen informing that the service has come to an end, or a screen informing that an error has occurred. In the last two cases, the execution of the Client Application terminates.

The communications system has been designed around a clsCommunication class that manages communications regardless of the protocol used (Bluetooth, Wi-Fi, etc.). The class diagram of this subsystem can be seen in Figure 6. Finally, to round off the Client Application design, Figure 7 shows the class diagram of the contents reproduction subsystem.

Now that the Client Application modules and design have been explained, we can describe how it executes. It must be remembered that the main objective of the Client Application is to provide information of interest to the client, reproducing the appropriate sound files depending on the specific point of the route at which the vehicle is located. Initially, the user has to choose the communication infrastructure to be used: Bluetooth, Wi-Fi, or others. The Client Application must then automatically carry out the steps required to activate the services needed to use the chosen infrastructure, and it starts searching among other devices for those that offer the chosen service. Once the on-board Information Server has been identified, the connection is established. To facilitate subsequent connections from the server to the client, in this first connection, once the server has been selected, the client sends the information the server needs in order to know that the device is using its application; this means there is no need to repeat the device search each time the server sends a new communication to clients.
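The paper does not include source code for clsCommunication, only its class diagram. A minimal Java sketch of such a transport-agnostic layer might look like the following; everything except the idea of a protocol-independent communication class is an illustrative assumption, and the in-memory implementation merely stands in for a real Bluetooth or Wi-Fi transport.

```java
// A transport-agnostic communication layer in the spirit of the paper's
// clsCommunication class: client code talks to one interface, and a concrete
// implementation (Bluetooth, Wi-Fi, ...) is selected at start-up.
// All names and signatures here are illustrative assumptions.
interface Transport {
    void connect(String serverId);
    void send(String message);
    String receive();   // next pending message, or null if none
    void close();
}

// In-memory stand-in implementation, used only to keep the sketch runnable:
// it simply loops sent messages back into its own inbox.
class LoopbackTransport implements Transport {
    private final java.util.ArrayDeque<String> inbox = new java.util.ArrayDeque<>();
    private boolean connected = false;

    public void connect(String serverId) { connected = true; }
    public void send(String message) { inbox.addLast(message); }
    public String receive() { return inbox.pollFirst(); }
    public void close() { connected = false; }
    boolean isConnected() { return connected; }
}
```

Swapping the concrete Transport at configuration time is what lets the rest of the client remain unaware of whether Bluetooth or IEEE 802.11 was selected.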
When a device stops using the application, the server is informed and the device is eliminated from the list of connected devices. Once registered, the Client Application waits until the server sends it a file to be reproduced. During reproduction, the Client Application continues to wait for new communications. Finally, if the client terminates the application, this event is communicated to the server, which removes the device from the list of connected devices.

The structure of the data packets used in client-server communication is simple. Each packet is formed by two fields. The first, Information Type, is an integer value identifying the type of information so that the client knows what to do with it. The second, Information, is a string of characters that, depending on the Information Type field, may be a command, information to be shown, or information about the service. The "#" character is used to separate the fields. Table 1 defines the types of information and the possible values and meaning of the Information field in each case. For example, one type is a server message indicating that a message must be shown to the client with the information given in the field of the same name; this may be useful, for example, for indicating stops.

The Information Server

The Information Server is installed in the vehicle infrastructure. Its task is to receive subscription requests to the service from clients and to send them the corresponding messages when the vehicle comes near a point of interest. The service begins to operate after a command sent by the vehicle to start the Server; beyond this, the Server does not require much interaction with the vehicle infrastructure. The working of the on-board Server can be described as a machine that goes through the following stages (Figure 8):

- Start-up (S0).
At this stage, all the control structures are initialized, all checks are made, and connection with the on-board infrastructure is attempted.
- Normal working stage (S1). This stage can be reached from S0 or S2: from S0 when all previous checks have been carried out and connection with the infrastructure succeeds, and from S2 when the detected anomaly has been overcome and communication is again operative with the infrastructure and with the user applications previously registered.
- Error stage (S2). This stage is reached from S1 when an error has been detected. The error may be internal to the server application itself or external, such as a loss of connection with the on-board infrastructure. If the error can be corrected, the server returns to S1. Otherwise, it moves to S3, or remains in S2 until the error is resolved; for example, if communication with the infrastructure is lost, it waits until communication is re-established.
- End stage (S3). This stage is reached from S1, as a result either of a request by the infrastructure to terminate execution or of a serious error.

We will now describe the set of datagrams that travel between the vehicle's infrastructure and the server, giving regular information on the state of the vehicle's infrastructure. First, the fields that make up the datagrams are described in Table 2; the datagrams used are then listed, and lastly each is described. The different datagrams are listed in Table 3, together with a description and the server states in which each datagram is used. These datagrams basically describe the situations that the vehicle's infrastructure needs to communicate to the server. Let us now look at how each datagram is structured and used.
The datagram shown in Table 4 is used to indicate that the vehicle's infrastructure is in error mode, so the Server needs to inform clients that the service is not available. The datagram in Table 5 informs us that the vehicle has gone into "not in service" mode: no service has been started, so the Server cannot attend any requests. The datagram described in Table 6 indicates that the vehicle is at an "operational stop", i.e., in transition between two journeys; the vehicle's next service is not yet known, so the server cannot accept requests. The datagram shown in Table 7 reports the start of a vehicle's service and marks its beginning; from this moment the Server admits subscriptions, because it now has the necessary information. The datagram described in Table 8 is the most important, because it reports the arrival at each stop. The datagram shown in Table 9 indicates that the vehicle has had to stop the service for some reason; the passengers may have had to change vehicle, and in principle this means that the service terminates.

The points of interest are compiled in a database that associates a GPS position with an attribute, which can be textual information to send to the client or the name of a file that the client has to reproduce. In both cases this information is sent to the subscribed clients. Regarding the GPS location of a point of interest, a precision correction must be incorporated by setting a margin of error around the point in question. The size of this margin is defined according to the geographical circumstances.
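The margin-of-error test around a point of interest can be sketched as follows. The paper does not say which distance formula is used, so the equirectangular approximation below is an assumption; it is adequate for margins of tens of metres, and all names are illustrative.

```java
// Checks whether the vehicle's GPS fix lies within a configurable margin
// (in metres) of a point of interest. The equirectangular approximation is
// an assumption -- the paper does not specify a distance formula -- but it
// is accurate enough at the scale of a proximity margin around a bus stop.
class Proximity {
    static final double EARTH_RADIUS_M = 6_371_000.0;

    // Approximate great-circle distance in metres between two lat/lon fixes.
    static double distanceMetres(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1)
                * Math.cos(Math.toRadians((lat1 + lat2) / 2));
        return EARTH_RADIUS_M * Math.sqrt(dLat * dLat + dLon * dLon);
    }

    // True when the vehicle is inside the margin of error around the point.
    static boolean nearPoint(double vehLat, double vehLon,
                             double poiLat, double poiLon, double marginMetres) {
        return distanceMetres(vehLat, vehLon, poiLat, poiLon) <= marginMetres;
    }
}
```

The margin parameter is exactly the geographically dependent tolerance the text describes: a stop on an open avenue might use a small margin, while a fast stretch of road would use a larger one.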
From the point of view of the data, content management is supported by a database comprising three tables: a table of routes (Routes_Table), one containing points of interest (InterestPoint_Table), and a third that links routes and points of interest (Routes_InterestPoint_Table). The on-board database is represented in Figure 9, which shows the relationships between the tables.

The routes table contains the set of lines that make up the transport company's network. It contains two fields: the route number (RouteID) and its name (RouteName).

The table of points of interest is very important, as it holds most of the information that the service processes and communicates to clients. It is made up of the following fields:

- PointID: an integer field that uniquely identifies the point. This is also the table's primary key (PK).
- PointName: a string field giving a representative name for the point of interest.
- File: a string with a maximum size of 255 characters. It names the file, linked to the particular point of interest, that clients have to download and reproduce. It consists solely of the file name; since no two files with the same name may coexist, it must be the only file stored in its folder with that name.
- Longitude: a decimal field representing the longitude of the coordinate at which the point is located, expressed in degrees and decimals.
- Latitude: a decimal field representing the latitude of the coordinate at which the point is located, expressed in degrees and decimals.
- Height: another decimal field, representing the height in metres (above sea level) at which the point is located.
- Processed: a field indicating whether or not the point has previously been selected from a different position.
This field is necessary because a point of interest triggers when the vehicle comes within a radius of proximity, so in subsequent queries to the database several coordinates are likely to satisfy the conditions for the same point of interest.

The coordinate has been split into three fields to facilitate storage and, more importantly, data retrieval. Moreover, to make the search process easier, two indices, I1 and I2, have been created: the first indexes the Longitude field and the second the Latitude field. No index is used for the Height field, for two reasons: it would not speed up the searches, and it would penalise data insertion and deletion. In any case, this has no serious consequences, because such operations are not very common and tend to take place only when the software is first installed. A uniqueness constraint has been placed on the coordinate, but not on the field storing the name of the associated file (File), since two different coordinates may share the same associated file. This is also semantically correct, because two routes may pass the same point of interest at different coordinates; for example, one may pass the front of a cathedral and another the back.

Given that several routes may share one or more points of interest, routes must be linked to their points of interest. This linkage is reflected in the Routes_InterestPoint_Table table, which contains two fields: the route identifier (RouteID) and the point-of-interest identifier (PointID). Apart from the primary key made up of both fields, this table has two foreign keys linking the RouteID and PointID fields to the Routes and Points of Interest tables, respectively.
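One way the indexed Longitude and Latitude columns can be exploited, consistent with the indexing rationale above, is to convert the proximity margin into a degree window, prefilter with an indexed range query, and only then apply the exact distance test in application code. The table and column names follow the paper; the SQL text and the helper methods are illustrative assumptions, not the paper's actual query.

```java
// Sketch of an index-friendly point-of-interest lookup: the BETWEEN clauses
// can use the I1 (Longitude) and I2 (Latitude) indices described in the text.
// The SQL string and helper formulas are illustrative assumptions.
class PoiLookup {
    static final String PREFILTER_SQL =
        "SELECT PointID, PointName, File, Longitude, Latitude " +
        "FROM InterestPoint_Table " +
        "WHERE Longitude BETWEEN ? AND ? " +
        "AND Latitude BETWEEN ? AND ? " +
        "AND Processed = 0";

    // Degrees of latitude spanned by a margin in metres (~111.32 km/degree).
    static double latMarginDegrees(double marginMetres) {
        return marginMetres / 111_320.0;
    }

    // Degrees of longitude shrink with latitude by cos(latitude).
    static double lonMarginDegrees(double marginMetres, double latDegrees) {
        return marginMetres / (111_320.0 * Math.cos(Math.toRadians(latDegrees)));
    }
}
```

The candidate rows returned by the range query would then be checked against the exact margin of error, and matched points flagged via the Processed field so they do not fire again on subsequent fixes.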
This database must be complemented with the set of audio files that can be reproduced.

From a design point of view, the Information Server comprises three modules (Figure 10):

- Positioning module: obtains the coordinates from the positioning system and checks whether the current position includes points of interest for tourists. To this end it has to access the database, so it uses the data access module.
- Communications module: manages the service that is published and offered to clients, as well as the incoming communications requesting registration and the outgoing communications sending files.
- Data access module: provides all the methods required to manage the data in the database.

Two permanent tasks run on the server:

- Location: this task permanently asks the vehicle's infrastructure for its location and, with this information, checks whether the vehicle is in the neighbourhood of a point of interest.
- Warning messages: when the location task detects that the vehicle is at a point of interest, this task must inform all interested clients with a message containing either explicit text about the point of interest or instructions telling the client application to reproduce one of the multimedia files associated with the route in question. There is a thread of this task for each subscribed client, because all interested clients must be notified at the same time.

Figure 3 shows the execution flow of the Information Server. Now that the modules and design of the Information Server have been explained, we describe its execution flow. Once the service has been initiated and Bluetooth activated, the MYTGIS service is registered in the Service Discovery Protocol server. The localization and registration-management threads are then created; these threads remain active throughout the execution of the service.
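The per-client fan-out described above, one notifier thread per subscribed client so that all interested clients are informed at the same time, can be sketched as follows. The client list and the "delivery" counter are illustrative stand-ins for real per-client Bluetooth connections; this is not the paper's implementation.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// One notifier thread per subscribed client, following the design in which
// every interested client must be notified at the same time. The counter is
// an illustrative stand-in for pushing a message down a client's connection.
class NotificationFanout {
    static final AtomicInteger delivered = new AtomicInteger();

    static void notifySubscribers(List<String> clients, String message) {
        List<Thread> notifiers = new ArrayList<>();
        for (String client : clients) {
            Thread t = new Thread(() -> {
                // a real notifier would send `message` over this client's link
                delivered.incrementAndGet();
            });
            t.start();                    // all notifiers run concurrently
            notifiers.add(t);
        }
        for (Thread t : notifiers) {      // wait until every client is informed
            try {
                t.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}
```

Spawning the notifiers before joining any of them is what gives the simultaneous delivery the text calls for, rather than notifying clients one after another.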
From this point on, there are three concurrent execution flows, one for each thread created. The localization thread takes care of checking the current position and looking up information relevant to clients. The registration-management thread is ready to receive client requests for registration or disconnection. In the thread that manages communication with the devices, the flow of actions is as follows: first, the thread waits for clients who request use of the application; when a request arrives, the new device is registered and included in the system. This thread also handles requests from devices wishing to leave the system.

The localization thread constantly checks whether the vehicle's current position has changed. Each time it moves, the thread queries the database to see whether any point coincides with the current location, which would mean that the vehicle is approaching a point of interest. At this point, the set of active devices is consulted and a thread created for each of them to communicate the new information, in the selected format, to the user. The main thread delegates the work to the threads described, apart from the user command interface, which enables notifications of a change in location and the sending of information or commands to clients. The only command that terminates the main thread's execution is the "Exit" command.

System Validation Tests

In this section, the tests carried out to validate the system are described. Given the aim of the system and its architecture, the tests focus on two key aspects that affect the validity of the system: local communication between the Information Server and the Client Application, and the reliability of the information provided to the passenger.
(Class diagram of the data access module, whose methods include getparameters, registry, unregistry, get_info_position, mark_treated_point, restaritinerary, get_active_clients, execute_query_sql and execute_non_query.)

The validation tests were carried out in the laboratory using a simulation environment. To ensure that this environment reproduces real situations occurring in a public transport company, we collaborated with Global Salcai-Utinsa, the public passenger transport company of the island of Gran Canaria. This company allowed us to install, in the on-board systems of a set of vehicles in its fleet, a programme that continuously recorded: (a) the GPS location of the vehicle (latitude, longitude, height, speed, date and time of the measurements, and measurement quality), and (b) the events that took place in the vehicle (starting and ending routes, passing stops, passengers boarding and disembarking, and technical alerts). The programme obtained this information by periodically accessing the vehicle's on-board computer.

Using the average speed of the vehicle as a criterion, the routes were classified into three types: fast, intermediate and slow. The vehicle's speed was chosen as the criterion because this parameter allows different system response time requirements to be established for passing a point of interest. Once a route from each of the three categories had been chosen, the recording programme was run for four months in the vehicles operating those routes. The programme organised the data obtained into three files: one for locations, one for events related to the monitored routes, and one for technical alerts.
These three files were periodically transmitted by the vehicle's mobile communications system to the laboratory and processed to incorporate the data into the simulation system's databases. The laboratory simulation consisted of executing a programme that queried the database to generate the recorded information (locations and events on a route over a period of time) as input data for the simulator. Thus, the tourist information system developed could be tested using data representing real situations that had occurred in the vehicles.

The mobile devices used during the tests were mid-to-low range mobile telephones, specifically devices with the Symbian operating system, with the only requirement being that they had Bluetooth for wireless communication. With this laboratory set-up, the tests simulated two scenarios. In the first, the system was validated during a route; this scenario is useful for validating the system's response times, which have to be appropriate given the different speeds at which the vehicles pass the various points of interest. The second scenario simulated the situation in a bus station, where a number of different information systems are available and the client connects to a specific one; it is useful for validating the time the client applications need to detect the information service and subscribe to it.

As Bluetooth technology was used, the tests analysing local communications were designed to obtain two key times: first, the time it takes for the Client Application to discover the service and then connect to the Information Server, and second, the time it takes to transfer the data that trigger the multimedia reproduction of contents by the Client Application.
To analyze the implications of the number of Bluetooth devices using the services, a variable number of Bluetooth devices participated in the tests, from 2 to 8, the maximum that a Bluetooth network can manage with a single Bluetooth server station (piconet). The tests determined that the time taken to discover the service varies between 10 and 20 seconds, depending on the number of Bluetooth devices connected to the network. Once the service has been discovered, the connection time is short by comparison and varies little, between 3 and 5 seconds, regardless of the number of connected devices. The transfer time necessarily depends on the amount of data to be transferred: transfers of 1 Kbyte took about 2 seconds, 79 Kbytes about 15 seconds, and 300 Kbytes around 2 minutes.

Response time was used to measure the reliability of the information provided to the passenger. In this context, response time is defined as the time that passes from the moment a relevant event occurs, such as the vehicle passing a point of interest on its route, until the event is communicated according to the preferences established by the passenger in the Client Application, for example as a voice message or a text message. The tests consisted of communicating different types of events with response time restrictions and different amounts of data to be transmitted. Given the transfer times described above and the varying lengths of time the vehicle takes to travel from one point of interest (for example, a stop) to the next, it was decided to limit the amount of data transmitted to communicate an event to a maximum of 1 Kbyte.
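Using the figures measured above, a rough worst-case bound on the latency of a client's first notification can be computed. The linear 2 s/Kbyte transfer model is a simplifying assumption taken from the smallest measured transfer, which is the regime relevant to the 1 Kbyte cap; the model and all names are illustrative, not part of the paper.

```java
// Back-of-the-envelope latency model built from the measured figures:
// service discovery 10-20 s (first connection only), connection 3-5 s,
// and roughly 2 s to transfer 1 KB. Linear scaling of transfer time is a
// simplifying assumption valid only for the small payloads (<= 1 KB cap).
class LatencyModel {
    static double worstCaseFirstNotificationSeconds(double payloadKb) {
        double discovery = 20.0;           // worst measured discovery time, s
        double connect = 5.0;              // worst measured connection time, s
        double transfer = payloadKb * 2.0; // ~2 s per KB at small sizes
        return discovery + connect + transfer;
    }
}
```

For a 1 Kbyte event this bounds the first notification at about 27 seconds, which illustrates why discovery and connection are performed once at subscription time rather than per event, leaving only the ~2 second transfer on the critical path of each notification.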
Conclusions

The development of information and communications technologies, especially in mobile contexts, has allowed new functionalities to be incorporated into public transport information systems, and specifically into passenger information services. A special case of this kind of system has been described in this paper. The system is based on the ubiquitous computing and ambient intelligence paradigms, and its main goal is to facilitate access to public transport and provide information of interest to tourists. Unlike on-route public transport passenger assistants, this system can inform about any point of interest on the route taken by the vehicle, adapting the information to the passenger's preferences, such as language and information type (audio, graphic or text), all of which are supported by the system.

The system comprises three main elements: the Client Application, the on-board Information Server and a repository from which the user can obtain the Client Application. The Client Application runs on the user's mobile terminal, the on-board Information Server executes on a computer that is part of the vehicle's infrastructure, and the repository is located on a freely accessible Web site.

From a technological point of view, the system presents the following innovative characteristics: first, its structure is distributed, meaning that the tourist information servers are deployed in the fleet of public transport vehicles rather than in a single server; second, as a consequence of this, passengers access the information using the local communication infrastructure available in the vehicles, in this case Bluetooth as the communication technology together with GPS; and lastly, the passengers' mobiles needed to access the information are medium-to-low range devices of the kind that are currently very popular.
To improve the system, several lines of future work must be developed. One is the use of Android technology to develop the Client Application. A second is to improve the system architecture by introducing a new element, a Service Broker, to facilitate the information service searches executed by the Client Applications. Finally, ZigBee technology could be used for communications with the passengers' mobile terminals.
The fading of disaster memory in Pulau Sebesi: A historical construction

The island has been impacted by volcanic eruptions that triggered tsunamis of very different scales: the 1883 event claimed 36,000 lives, while the 2018 event took one victim. Some island communities, notably Simeulue, have succeeded in preserving the memory of tsunamis through oral tradition; the Sebesi community, by contrast, failed to maintain its tsunami memory. The gap of 138 years seems to have buried the memory of the tsunami on Sebesi island. This paper explores why the Sebesi community failed to maintain its disaster memory. To understand how the people of Sebesi forgot their past disaster, the paper uses the longue durée approach, an oral history framework and archival studies to analyse the structures, both environmental and socio-political and cultural, that played a role in the disappearance of disaster memory. The study reveals that no one survived the catastrophic 1883 tsunami and that the island was repopulated only after the 1940s. This resulted in the formation of a community without disaster memory. Only after the 2018 tsunami did the community of Sebesi Island begin to be aware of the hazards in their environment. Uncovering the fading of disaster memory in Pulau Sebesi offers lessons for pursuing a resilient development trajectory on the island.

Introduction

"…every full moon I feel frightened. I remember the high waves chasing our vessel. Had it rolled over our vessel, I might have been dead by then…" (Bapak Jefry, eyewitness and survivor of the 2018 tsunami, a staff member of BKSDA Lampung).

The tsunami disaster that occurred on December 22, 2018, at around 21:27 WIB, holds an important meaning for the people of Sebesi Island. The disaster, which struck during the full moon, killed 430 people in the Pandeglang area and on the coast of South Lampung [1]. It became a milestone in the disaster awareness of the people of Sebesi regarding the dangers of their surrounding environment.
PVMBG (the Volcanology and Geological Disaster Mitigation Center) had been monitoring this mountain since early December 2018. The agency detected an increase in the eruptive activity of Anak Krakatau on Friday, December 21, 2018, with ejecta reaching a height of 738 meters above sea level; Anak Krakatau was therefore given alert status by the institution. In line with this, BMKG (the Meteorological, Climatological, and Geophysical Agency) was also monitoring the threat of high waves on December 22 and issued warnings to the public [2]. Unwittingly, continuous eruption since early

For the people of Sebesi, the 2018 tsunami disaster appeared to be a new experience, although history records that a similar disaster of far larger scale had hit this island. The eruption of Mount Krakatau on August 27, 1883 was so intense that it generated a tsunami that devastated the coasts of Pandeglang and South Lampung and Sebesi Island, and even affected the world [6]. The span of 138 years succeeded in fading the disaster memory in the minds of Sebesians. In other affected areas, memory persisted: in Labuan sub-district, Pandeglang, communities continue to commemorate the calamity by convening haul kalembak, a tradition in which people gather in the Labuhan Great Mosque to pray for the victims of Krakatau 1883. Likewise, the people of Simeulue island preserved the experience of the tsunami disaster that befell them in 1907: through oral tradition, namely lullabies and storytelling, the experience was recorded and passed on to the younger generations of the island community. This tradition, known as smong, proved to save the people of Simeulue from the great tsunami in 2004 [7].

From a historical perspective, Bankoff [8] explains that disasters can be seen along two trajectories: the 'natural', in this case the forces of nature, on the one hand, and society on the other. When these two trajectories meet at a certain place and time, a catastrophic event occurs.
At that moment a new experience is formed for the survivors. The experiences of individuals form a shared memory in a particular community. This is in line with Halbwachs's explanation that collective experiences can form a 'collective memory'. He explains that memory refers to individual processes, but it plays a role in constructing collective memory within the community by forming specific behaviour limited to a specific community, location, area, and time [9]. He concluded that a shared framework for memory is the result, or combination, of the individual memories of people in the same community [10]. Based on this concept, this paper tries to reconstruct the failure of the Sebesi community to preserve the memory of the 1883 Krakatau tsunami. Furthermore, the new knowledge that emerged after the 2018 tsunami, as a form of new behaviour in the Sebesi community, is also analysed.

Methodology and research sites

This paper aims to reconstruct the fading of the memory of the 1883 tsunami among the people of Sebesi island. It also analyses the new knowledge that arose as an impact of the recent tsunami. To build a broader understanding of collective memories and how people maintain them, the paper applies a limited comparative case study with the local knowledge of smong from Simeulue, Aceh [7], [11], [10], knowledge that has attracted many scholars and been translated into policy implementation. This research employs the longue durée approach together with archival study and oral history. Longue durée is a structural analysis of historical reality that does not leave out the aspect of time, and presupposes structural change itself, albeit change that runs very slowly. It does not always refer to temporalities, but is rather a way of seeing a reality that forms slowly over a long period and then becomes a precondition for the problems that arise on the surface.
This approach makes it possible to find the relationship between agency and environment over a long period of time. Slow-moving time, running as a long process, becomes the precondition of fast-moving time as conjuncture, which later comes to the surface as an event [12]. Accordingly, the change in community structure resulting from natural structures, as recorded in memory, can be investigated through this approach.

Primary data and secondary information presented in this paper were gathered during irregular visits to Sebesi island over several weeks in 2019, 2020, and 2021. Data were gathered through archival studies, literature reviews, and interviews with older individuals and tsunami survivors, in search of stories about disasters on this island. Secondary data were gathered from the Internet and published reports. Academic papers, as well as relevant documents of governmental and non-governmental organizations, were studied and evaluated. Relevant reviews of selected literature and published stories about earthquakes and tsunamis were also analysed and elaborated in this paper. In addition, field notes from a project on risk culture conducted by the Indonesian Institute of Sciences in several coastal areas of Banten in 2015 are used to enrich knowledge about people and disasters in coastal areas.

Sebesi Island lies in Lampung Bay, between 05°55'37.43" and 05°58'44.48" South latitude and between 105°27'30.50" and 105°30'47.54" East longitude. It has an area of 2,620 hectares; currently 787 families, or about 2,795 people, are recorded as living on the island. Most of the people of Sebesi work as farmers (75%) or fishermen (20%), while the remaining 5% consists of government employees, traders, and tourism actors.
Meanwhile, the dominant ethnic group on this island is Bantenese (Javanese-Serang, or Jaseng); the second largest is South Lampung, followed by several other groups such as Sundanese, Batak, Nusatenggara, Bugis, and Padang [13]. Sebesi island is bordered by Lampung Bay and Sebuku Island in the north, the Indian Ocean in the west, the Krakatau island complex, including Mount Anak Krakatau, in the south, and the Sunda Strait in the east. Administratively, Sebesi island belongs to Tejang Village, Rajabasa District, South Lampung Regency. Tejang Village comprises four hamlets: Hamlet I Bangunan, Hamlet II Inpres, Hamlet III Regahan Lada, and Hamlet IV Segenom [14].

Although obscure in the Dutch archives, Sebesi island's fertility lured people to settle, both those who came from around the Rajabasa and Kaimbang areas and those who came from Banten. Long before Krakatau erupted, this island had become a stopover for Dutch ships before they entered the Banten waters. While English ships with their weapons and merchandise landed on Legoendi Island, Dutch ships, one of which was commanded by Pieter de Carpentier in 1624, used Sebesi Island to take on water and supplies, repair the ship, and wait for permission to enter the Port of Banten [15].

After the eruption of Krakatau, Sebesi island became uninhabited. The first visit to Sebesi, 14 days after the eruption, was conducted by Berg of the Javabode in 1883; he described the island as buried under a mudbank six metres thick, with giant pumice stones along the shore [16], [17], [18]. Another visit, on May 21, 1884, reported that the only living things were a few banana treetops, tall weeds and bushes, and that the island was full of desolation; its landscape was only a stretch of white sand and bones, of which it was unknown whether they were animal or human [19].
After the eruption of Krakatau, ownership of Sebesi island fell into dispute because all the descendants of Pangeran Singa Brata had died, together with all existing evidence of ownership. Pangeran Minak Poetra, claiming to be the surviving younger brother of Pangeran Singa Brata, appealed for the two islands. Under customary law, since all the descendants of Singa Brata had perished in the Krakatau disaster, Minak Poetra, as a younger brother, could be appointed head, or penyimbang, of the Raja Basa clan. He also appointed himself Bandar, a position that traditionally carried power over these islands. Minak Poetra's appointment received the blessing of the Sultan of Banten in the 1890s [20]. The Dutch likewise approved Minak Poetra as penyimbang and bandar of Raja Basa, a recognition earned by his assistance in suppressing the rebellion in Cilegon, Banten, in 1888. Dutch approval was also seen as an effort to cut off the line of Pangeran Singa Brata, who had been an important ally of Radin Inten II in the struggle against the Dutch. Under Minak Poetra, Sebuku island was first leased to the Lanberg firm in 1889 to grow pepper and timber, while Sebesi was left untouched. Subsequently, Sebesi and Sebuku Islands were sold in 1896 to a trader named Hadji Djamaloedin, who, together with Pangeran Minak Poetra, had been awarded a gold medal by the Dutch government in 1888 for helping to stop the riots in Cilegon. Sebesi island was purchased for f.7000, while Sebuku was priced at f.3000. Under Djamaloedin, the two islands obtained the status of individueel bezight recht (private property) through the Governor General's besluit in 1906 [21], [22].

Plantation

As if a blessing in disguise from the 1883 Krakatau eruption, the land on Sebesi became very fertile, and the island's agricultural products were abundant.
A report on a trip to Sebesi island in the 1920s stated that the island was filled with thriving coconut trees planted by Hadji Djamaloedin and his workers. An archive records that a ship loaded with chickens and goats was sent from Kalianda port to Sebesi island by Hadji Djamaloedin in the early 1900s, followed by the dispatch of workers from Banten to Sebesi [23]. Entering the 1940s, world economic conditions declined under the shadow of the Second World War, and this affected the economy of Sebesi: coconut prices fell badly. In the 1950s, coconut prices recovered and again gave decent returns, as evidenced by several supervisors who were able to go on the hajj with the proceeds of coconut sales at that time [24]. While coconut continued to support Sebesi's economy, bananas also became an important commodity, serving as a staple food for the community since they were always available on the island. The island also became a site of clove cultivation in the 1990s, as some farmers converted their gardens to cloves. That commodity was destroyed during the 1997 monetary crisis, and residents began uprooting clove trees and replacing them once more with coconut and bananas. In 2004, the government introduced a cocoa planting program, but without market clarity the results of chocolate in Sebesi were not as sweet as expected. Today, the community continues to rely on coconut, sold by the piece for one thousand rupiah each, and bananas as the main commodities supporting their lives [25].

Settlement

The development of settlements on the island is closely related to plantation activities. The first wave came in the 1920s, when farmers from Canti, Way Muli, and the surrounding areas came to ask Hadji Djamaloedin's permission to open agricultural land.
Later, during the 1930s, the waves of workers brought in by the landowner also opted to settle on the island. They were allowed to farm under a profit-sharing system in which 2/3 went to the farmer and 1/3 to the owner of the island; the share was paid not in commodities, in this case coconut, but as 1/3 of the total trees grown by the farmer [26]. Like their fellow farmers, these workers applied for approval to work a piece of land under the profit-sharing system. The requests were granted by Hadji Djamaloedin, and the system was continued by his son, Saleh Ali. From 1937, Saleh Ali issued letters of agreement with several labourers who asked to become cultivators under the profit-sharing system. Entering the 1940s, these workers brought their families with them and began to build an umbul (a small hamlet) consisting of several houses of fellow workers. In this era, the island of Sebesi once again became a settlement for the workers' community [27]. Meanwhile, the waters where Mount Krakatau once stood showed renewed seismic and volcanic activity in 1927. Roars were heard, and large bubbles rose to the surface, erupted, and released ash and sulfuric gas; these bubbles indicated the effort of a new mountain to build itself. On January 26, 1928, a pile of ash and solid rock began to emerge from the sea in the shape of a curve, followed by explosive and earthquake activity that continued until 1929. The solid rock grew into a new black plain and continued to expand into an island. A Russian geophysicist, W.A. Petroeschevsky, was the first to notice this development, observing it closely from Panjang Island; he named the nascent plain the 'Child of Krakatau.' The island began to show stability on August 11, 1930, and grew more solid in 1931 [6]. In response, the colonial government carried out detailed observations of the growth of the newly born mountain.
A monitoring post in Kalianda was obliged to report the mountain's daily condition to the central government in Buitenzorg; these reports lasted from 1928 to 1931 [28]. The 1950s to the 1960s were a time of immigrant influx to Sebesi, as the first migrants began inviting their relatives and neighbours to follow them. The umbul began to turn into hamlets in the 1960s as new houses were built. In this period there were three main hamlets, namely Tejang, Segenom, and Regahan Lada, with a population of around 500 people. In 1980, a village administration was established on Sebesi under the name Desa Tejang, included in the Kalianda sub-district. In 1985, with the help of the military, roads were built connecting the hamlets; the port was repaired and enlarged, and a scheduled public transport ship was introduced. The local government also built supporting facilities, such as an elementary school building and an office for village staff. To meet the health needs of the island's 800 inhabitants, the village head asked the local government to send health workers, but it was only in 1992 that a Health Center was built, after which one midwife and one mantri (paramedic) were stationed there [29]. The existence of Anak Krakatau since 1930, counted among the most active volcanoes in Indonesia and having erupted 40 times over 85 years, did not seem to threaten the community, who lived side by side in peace with nature. The volcanic activity of Anak Krakatau, with its rumbling sounds, small smoke-emitting eruptions, smell of sulfur, and volcanic ash dusting Sebesi, did not distract them; the routines of daily life helped fade their memory of past disaster [30]. The people of Sebesi considered the volcanic activity of Anak Krakatau a harmless, routine phenomenon, and so had the courage to keep living there.
The risk of eruption was not perceived as a hazard that could create disaster.

Tourism

With an area of less than 3,000 hectares, Sebesi Island falls into the category of a small island. The defining characteristics of such islands are smallness and remoteness, which heighten vulnerability to natural hazards [31]. Economically, islands are characterized by limited markets and limited human and non-human resources, so their economic activities are less diversified and closely tied to the water, such as fisheries, inter-island trade, and tourism. Sebesi Island likewise relies on agriculture, fishing, and tourism [32]. Owing to its natural environment, Sebesi island was designated one of the leading marine tourism destinations by South Lampung Regency. Its white sandy beaches, beautiful scenery, and the direct view of Mount Anak Krakatau across the water are the main reasons Sebesi is promoted as a tourism destination; indeed, Anak Krakatau has proved a magnet for the development of tourism on Sebesi. In 2008, the provincial government designated Sebesi Island as one of the Tourism Destination Objects (ODTW) in South Lampung Regency, offering marine and adventure nature ecotourism. To support tourism revenue, the Lampung government also promotes an annual event, the Krakatau Festival. Confirmed through the Decree of the Governor of Lampung Number G/126/Diparda/1991, the Krakatau Festival was inaugurated as an annual tourism activity for the Lampung region. Various cultural events, fairs, and parades enliven the festival, its main pursuit being a visit to Mount Anak Krakatau. This involves Sebesi island, since participants stay overnight there before visiting Mount Anak Krakatau the next morning [33] (Source: Evadianti, 2017 [34]). As part of their lives, Anak Krakatau is also considered a blessing from God.
Ibu Rumanah, Ibu Jamilah, and Ibu Maemunah, at the grocery shop where they usually spend their afternoons, explained that Sebesi is their hope for fulfilling their needs because Sebesi is 'home' for them. From 2008 onwards, a wave of local and foreign tourists flooded Sebesi, especially when the provincial government held the Krakatau Festival, during which visitors staying on Sebesi could number as many as a thousand. Even at the time of the tsunami, several houses in Dusun I Bangunan were hosting about 30 guests from Jakarta [35]. Given this historical background, the decision to develop Sebesi Island as a tourist spot requires careful attention and planning from the government. Turning Sebesi into a tourist destination draws people together, increasing vulnerability when a disaster occurs; it therefore demands awareness and preparedness in management from local people, the government, and relevant stakeholders.

The 2018 tsunami: a memory reminder

It has not been easy for Jefry (35 years old) to live as he did before the tsunami, for he witnessed the enormity of the waves caused by the flank collapse of Anak Krakatau. He still clearly remembers that evening: he and seven colleagues had been at the post for a week, assigned to monitor Anak Krakatau. He and four colleagues patrolled around the island while two others were stationed at the guard post. Their aim was to keep fishermen off the island because of its alert status amid rising volcanic activity. Ordinary people were apparently still unaware of the mountain's dangers, for the team saw three fishing boats landing on the island. The island is rich in crabs, enticing many fishermen to try their luck hunting them there despite the dangers lurking in the lava and sulfuric fumes [36].
As the night grew late, they saw four fishing boats docked at the island and tried to chase them away by shining flashlights at the fishermen. From a distance, Jefry could see the fishermen lighting their stoves; they looked relaxed and not at all frightened, although Anak Krakatau erupted continually that night. It was a full moon, and the sky was so clear that the sparks from Anak Krakatau were plainly visible. According to Jefry, one unusual sight that night was the black smoke that did not leave the peak, where usually smoke would evaporate away from the peak soon after emerging. They also saw the peak glowing reddish, with a flash like a path of fire on the seabed. Finally, the BKSDA patrol boat approached, not only to disperse the fishermen but also to collect the colleagues on guard at the post. According to Jefry, the captain felt something was wrong with Anak Krakatau and ordered all his crew to leave the guard post immediately and return to Sebesi, even though they were not due back until the following day [37]. It was about 9 PM when the BKSDA ship slowly left the island, after driving the fishermen away. Unfortunately, of the four docked boats, only one rushed to leave; the other three stayed behind, on the excuse of having dinner first. The ship had been moving for no more than ten minutes, about 500 meters from the island, when there was a loud bang, followed by rumbling and lightning erupting from the peak of Anak Krakatau and debris that almost reached their vessel. The debris engulfed the three fishing boats still docked and created very high waves. The ship's engine kept running, but for some reason the vessel could not go faster; it crawled along even as high waves rose behind them and to their right and left.
Somehow, though the ship was moving very slowly, the waves gradually subsided without striking it. Accompanied by the continuous azan of one of their comrades, the ship slowly made its way towards Sebesi. There was no signal at the time, so they could not contact their colleagues on Sebesi Island to report the condition of Anak Krakatau and ask to be picked up. An hour out from Anak Krakatau, heading for Sebesi, they started to get a signal. The telephone of one of Jefry's colleagues rang with the news that a tsunami had struck Sebesi and that the BKSDA's help was needed, just as they themselves were about to ask the people of Sebesi for help. Hearing this, Jefry grew increasingly frantic, thinking of his wife and child on Sebesi. Jefry is a native of Sebesi, born to a father from West Nusa Tenggara and a mother born on the island. They finally arrived safely on Sebesi; the trip, which usually took only two hours from Anak Krakatau, took nearly four. On arrival they found the port partly destroyed by the tsunami and the village deserted. Acting on information from his brother, Jefry headed straight for Mount Sebesi, where he found his wife and children and gathered with his extended family. The horror was not over: on Mount Sebesi, the roar of Anak Krakatau and heavy rain continued until morning, when both finally ceased. It has not been easy for Jefry to forget this disastrous experience. Dread often returns whenever he is assigned to monitor Anak Krakatau, and on every full moon since the tsunami he still fears high waves. But life must go on, and giving up his job at the BKSDA is unrealistic, so he has decided to fight his fear by surrendering to God. As for the Krakatau eruption of 1883, Jefry remembers it only vaguely.
He never paid attention to the story; indeed, Jefry admits that he forgot about the event, although he may have heard of it from his parents or grandfather, from the media, or at school. For Jefry, the 2018 tsunami was new disaster knowledge formed from personal experience [38]. While Jefry witnessed the eruption of Anak Krakatau whose collapse triggered the tsunami, people in several areas of Sebesi Island experienced the horror of being hit by the waves themselves. One of them is Nenek Kani (65 years old), a resident of Regahan Lada. When people began screaming about a tsunami, she was still in her house, trying to collect some clothes and basic food. The first wave then arrived, more like a tidal surge, and entered her house. Seeing the water coming in, Nenek Kani, who lived alone, immediately left the house and walked towards Mount Sebesi, following the advice of the village head and the people around her [39]. On the way to the mountain, she saw the second wave roll up high, hauling three docked boats onto the mainland and smashing them into her house and her neighbours' houses. In horror, Nenek Kani pressed on towards the mountain. Because of her age she could not climb far, so she stopped at a sufficiently high place along with others who could walk no further, resigning herself to God should the waves reach her shelter. Fortunately, the third wave was not as powerful as the second, and her shelter was safe. It was the second wave that destroyed many houses in Regahan Lada hamlet. The following morning, Nenek Kani returned home to find the house full of debris and its right-side wall collapsed by the tsunami. Several houses to the right of hers were so badly destroyed that they could no longer be lived in.
She preferred to return to the shelter for another night, afraid of another tsunami. Nenek Kani, who came to Sebesi in 1955, recalled that in all her years on the island this was her first such experience, and the most terrifying disaster she had known. She had never before heard the word tsunami, merely repeating what people around her said without understanding it; only after the event did she realize that such a wave is called a tsunami. She had never experienced or heard any stories of catastrophic disaster on Sebesi, not even from her parents or grandparents: the 1883 Krakatau tragedy appears never to have been recorded in the memory of this 65-year-old grandmother, and to her Anak Krakatau was nothing more than a volcano near the island. Recalling the 2018 tsunami, the people of Sebesi could hardly believe their island would be struck by such a severe disaster. In all his life on Sebesi Island, Mr. Mochtar (now around 70 years old) never imagined he would suffer a tsunami disaster; to him, Mount Anak Krakatau had become like one of God's creatures living side by side with them. Mr. Mochtar is nevertheless aware that the island they occupy was completely destroyed after the eruption of Krakatau in 1883, and he still remembers the story his dato (grandfather) told of the enormity of that eruption [40]. Andi (21 years old) and Hasmy (31 years old), like many other Sebesi youths, were relaxing at Tejang Pier. It was a full moon, and since it was Saturday night many people were still lingering around the pier, including the head of Dusun Bangunan, Mr. Achmad Kurtubi (40 years old). At around 9 PM, the sea suddenly receded by about ten meters, and many small boats at the dock were stranded on the suddenly dry seabed.
Many fish were floundering on the exposed sand, but the people on the pier did not dare go down. They found the phenomenon strange and began to murmur about the Aceh tsunami, which had also begun with a receding sea. It is not clear who first shouted 'tsunami', but the people on the pier started screaming and ran to their hamlets to save themselves. Some fled straight to Mount Sebesi, while many others ran home first to save their families, alerting others as they went; in an instant, people panicked and flocked to Mount Sebesi to escape the tsunami [41]. At that time, a group of about 30 tourists from Jakarta was staying at Mr. Hayun's homestay. They were eager to see the active Anak Krakatau, whose reddish glow was an exciting phenomenon, and after dinner they planned to visit Gubuk Seng, an area with a clear view of the mountain where a monitoring tower belonging to the South Lampung regional government stood. Luckily, the group had not yet left, because dinner had been delayed; Gubuk Seng was the place worst hit by the tsunami. The tourists, together with local people, fled to Mount Sebesi and spent the night there [42]. Looking back on the disaster, Andi believes it was the worst he has ever experienced, yet he retains no memory of the 1883 Krakatau story at all. As far as he remembers, neither his parents nor his grandparents ever told stories of the 1883 tsunami disaster; it is rare for young Sebesi children to hear stories from their parents, who are busy with their plantations in the middle of the forest. Like Andi, Hasmy also stated that he did not remember the story of Krakatau in 1883, except vaguely through television or social media; as far as he remembers, his parents and grandparents never shared stories with him, including about the 1883 Krakatau disaster.
By contrast, Kurtubi remembered the 1883 Krakatau story, which he picked up from conversations with elders on Sebesi, though the story had blurred in the absence of any major disaster befalling the island. Memory is an individual process of limited capacity: a person can remember information obtained and can equally forget something learned earlier. Forgetting is the failure to recall existing information, or the loss of the ability to recall or reproduce previously learned knowledge. As stored information weakens, it becomes difficult to retrieve, and it tends to be lost over time if it is not used; this is known as decay theory. It accords with Ebbinghaus' research, which showed a close relationship between forgetting and time: the important factors are how the information was obtained and how often it is used. The less it is used, the more the memory fades, and the longer the span of time, the more likely it is to be forgotten [43]. This is reflected among the people of Sebesi, for whom the 2018 tsunami was believed to be their first such experience, although history records that a disaster of the same type, on a much larger scale, had struck before. Some people, especially of the older generation, still remember the story of the 1883 Krakatau eruption that triggered a tsunami, mainly through the stories of their dato or grandmother. The disaster gap, the rarity of severe hazards over more than a century, is a significant element in explaining the loss of disaster memory; most other people cannot recall whether they ever received stories or information about the event at all. For the younger generation of Sebesi, what information they have about the 1883 Krakatau disaster comes mostly from school or the media.
Unfortunately, this information was easily replaced by other information for lack of repetition, and so it faded away. Another reason this devastating disaster blurred in the minds of the people of Sebesi is the fading of oral tradition between the older and the current generation, whether in the form of storytelling, lullabies, poetry recitation, or the narration of legends. Some tourism activists have tried to create a storytelling tradition about Krakatau, but only to attract incoming tourists: at night, when travellers gather, a guide recounts the story, to a positive response. Regrettably, the practice is not performed for local communities, resulting in a near-total lack of civilian disaster awareness and preparedness and a diminishing of cultural value in society. The 2018 tsunami played an important role in restoring disaster memory for the Sebesians. The experience of disaster shaped specific attitudes and behaviour toward their natural environment; the combination of individual experiences of the 2018 tsunami gave rise to a collective memory for the community, binding their most intimate remembrances to one another and reminding them that they live in a hazardous environment. This memory encourages an attitude of awareness within the community: the Sebesians now ready themselves when they smell strong sulfur, see a sudden rain of ash, or hear a roar, taking these as signs that something is happening at Anak Krakatau. They also watch the waves and their surroundings, and believe the safest place for evacuation is Mount Sebesi. What has occurred on Sebesi thus corresponds to Halbwachs' concept of collective memory.
Preserving this shared memory, whether through storytelling, poetry recitation, narratives, or children's songs, is an important way of transmitting messages, bringing communities together, and maintaining historical ties; in this way, memories of the 2018 tsunami can last for several generations. A comparison is offered by the smong story of Simeulue, which concerns the devastating tsunami of 4 January 1907 that hit parts of that island. Colonial archives report that areas of western Simeulue were damaged and many victims recorded. Because of the trauma of that event, tsunami survivors told the story across generations, embedding the message that if there is a strong earthquake and the water retreats from the shore, people should run to the hills to evacuate and avoid going to the shore to collect fish: "…bila ada gempa kuat, laut surut maka larilah ke bukit dan jangan ke pantai untuk mengambil ikan…". The story was passed down not only among the native people of Simeulue but also to migrants. When the Indian Ocean tsunami of December 26, 2004 hit Simeulue with the same precursors as on 4 January 1907, there is evidence that many people recalled the smong stories and successfully evacuated to safety. Seven people were killed during the evacuation, mostly from panic and from riding motorbikes to the evacuation areas. The event convinced many that knowledge of smong had saved the people of Simeulue from the tsunami. There are, however, different versions of how the smong story was transmitted from one generation to the next. A UNESCO-sponsored study conducted by LIPI in 2005 stated that smong stories were transmitted through storytelling from the older generation to their children, grandchildren, and so on, usually prompted by the situation, mainly when a natural hazard such as an earthquake or flood occurred.
The elders would tell stories that the current hazard was not so dangerous compared with the 1907 tsunami. In another version, elders or parents simply told the stories without any particular occasion, whenever they had the chance. In yet another, the story was transmitted through local folklore, namely the singing of buai-buai (a traditional lullaby) and the narration of nandong (poetry) [10], [11]. The lyrics vary with the local dialects of Simeulue, but all show that smong is part of local knowledge in Simeulue. The UNESCO report (Yogaswara and Yulianto, 2005) explained why the smong stories persisted across generations until the 2004 tsunami by pointing to several factors: (1) frequent earthquakes and other natural hazards that reminded people of the 1907 tsunami (smong); (2) survivors of the 1907 tsunami who were willing to tell their stories; (3) the kinship system and cultural bonds among the people of Simeulue; and (4) material culture, natural and human-made, that became 'living monuments' reminding people of the 1907 tsunami. In addition, the geography of Simeulue island, dominated by coastal and hilly areas, allows people to monitor the water after an earthquake and gives easy access to evacuation routes. The tsunami of December 2004 that hit Simeulue created new knowledge of smong as local knowledge that saves people. The smong story has since been researched by many parties, including the academic community, government agencies, and international funders, and has been transformed into different media of expression, including digital tools aimed at the millennial generation. Indeed, the smong story has become part of policy-making in the qanun on disaster education, and it will remain in the minds of the people of Simeulue and Aceh.

Conclusion

History records the rise and fall of Sebesi island through catastrophic disaster.
The eruption of Krakatau on August 27, 1883 generated an enormous tsunami that took the lives of everyone on Sebesi. Based on the reports of officers who visited Sebesi after the eruption, it is unlikely that anyone survived; the people living there today are therefore not witnesses of, or survivors from, that severe disaster. The current society is a migrant community that came to Sebesi to seek opportunity in the development of agriculture. The fertile plantations cannot be separated from Krakatau's "blessing", for the volcanic ash that buried Sebesi made the island's soil very fertile. The birth of Anak Krakatau in 1930 posed a new challenge for the newly established settlements on Sebesi, but economic motives for staying outweighed the threat of an unknown danger. Over time, Sebesi has become home to more than two thousand people. The growth of Anak Krakatau brought them a piece of good fortune, namely the development of tourism, which is economically profitable. The threat of danger was viewed as a mere figment, since Anak Krakatau remained so calm, pampering the Sebesians with the beauty of its evolution and the abundance of marine life. The rare occurrence of severe disaster between 1883 and 2017 left the people of Sebesi unaware of their risky environment: the continual activity of Anak Krakatau, from booms and rains of ash to the strong smell of sulfur, was believed to be normal and harmless. The disaster gap, differing perceptions of Anak Krakatau, and social, economic, and cultural life are significant elements in explaining the loss of the disaster knowledge once learned from elders. In addition, the absence of oral tradition practices, in the form of storytelling, lullabies, poetry, and the like, has led to the loss of disaster memory in the current generation.
Although the local government regularly holds an annual commemoration of the Krakatau eruption (the Krakatau Festival), this has not increased awareness or disaster risk reduction among the communities of Sebesi; there seems to be a memory distance between the people of Sebesi and those on the Lampung mainland and the Banten coast. Tsunami-related knowledge among young people on Sebesi has mostly been formed through school and the media. Only after the Aceh tsunami of 2004 did the public become familiar with the word tsunami, owing to very intensive media coverage; in the past, people were more familiar with rajuh, the rising of the tide into the land. The condition on Sebesi thus accords with the concept of cultural memory, in which the media play an important role in shaping memories and, further, in regulating people's behaviour. The power of nature revealed itself once again when Anak Krakatau triggered the tsunami that struck Sebesi island, the coast of South Lampung, and Anyer and Pandeglang on December 22, 2018. The disaster changed the way the Sebesians perceive Anak Krakatau, from a blessing to a threat. The Sebesians are now responsive to their surrounding environment, and the hidden blessing of the disaster is the re-establishment of disaster memories and the revival of disaster awareness and preparedness. The lesson of the smong story of Simeulue is that cultural preservation needs to be considered through appreciation of the local people who maintain their knowledge. Local knowledge is always changing with socio-cultural change in the community; recognizing it, however, is the best way to acknowledge that local people, not only scientists or the government, own their knowledge.
Schizophrenia-associated mt-DNA SNPs exhibit highly variable haplogroup affiliation and nuclear ancestry: Bi-genomic dependence raises major concerns for link to disease Mitochondria play a significant role in human diseases. However, disease associations with mitochondrial DNA (mtDNA) SNPs have proven difficult to replicate. An analysis of eight schizophrenia-associated mtDNA SNPs, in 23,743 Danes without a psychiatric diagnosis and 2,538 schizophrenia patients, revealed marked inter-allelic differences in mitochondrial haplogroup affiliation and nuclear ancestry. This bi-genomic dependence could entail population stratification. Only two mitochondrial SNPs, m.15043A and m.15218G, were significantly associated with schizophrenia. However, these associations disappeared when corrected for haplogroup affiliation and nuclear ancestry. The extensive bi-genomic dependence documented here is a major concern when interpreting historic, as well as designing future, mtDNA association studies. Introduction Genetic variants in mitochondrial DNA (mtDNA)-and in nuclear genes coding for mitochondrial function-have been associated with disease [1][2][3]. More than 300 variants [4,5] in mtDNA and genes involved in mitochondrial function [6] have been reported to cause mitochondrial disease, which is clinically characterized by complex metabolic, neurological, muscular and psychiatric symptoms [7,8]. Mitochondrial DNA haplogroups (mtDNA hgs), which are evolutionarily fixed SNP sets with a characteristic geographical distribution, have been proposed as potential disease modifiers [8]. This has been reported in neurological degenerative diseases such as Alzheimer's disease [9][10][11][12] and Parkinson's disease [12][13][14], metabolic diseases and cancers [15], as well as psychiatric diseases, notably schizophrenia (SZ) and bipolar disease [16][17][18]. Association studies of mtDNA variants and disease have been difficult to replicate [8]. 
However, the definition of a methodological paradigm for association studies with mtDNA variants [19] implicitly assumes that mtDNA variants are independent of the nuclear genome (gDNA). In a recent Danish study on mtDNA hgs and their nuclear ancestry, we demonstrated a marked difference in nuclear ancestry between individual mtDNA hgs [20]. This means that mtDNA hgs entail population stratification also at the level of gDNA. The effect of such a stratification on disease association will depend on the admixture structure of the particular population, the population history, epidemiology and genetic epidemiology of the disease, as well as the number of persons included in the study. The extensive fine-scale heterogeneity of gDNA and significant admixture documented in the UK [21] and Europe [22] further increase the risk of spurious false positive associations, if the mtDNA/gDNA interaction is not corrected in association studies. Using DNA-array data from the Danish iPSYCH study on 2,538 SZ patients and 23,743 population controls, we show that eight mtDNA SNPs, previously associated with SZ [16][17][18]23], exhibit considerable inter-allelic differences both with respect to mtDNA hg affiliation and nuclear ancestry. This phenomenon, which we name bi-genomic dependence, affects the association between an mtDNA SNP and mtDNA, as well as gDNA, and can lead to both false negative and positive associations with disease. We demonstrate that, in this cohort, it is only possible to replicate the association results for two of the original eight SNPs when correcting for population stratification. Both mtDNA hg affiliation and nuclear ancestry affect the strength of association. Finally, we show that none of the SNPs are associated with SZ when examined on a particular mtDNA hg background, with correction for bi-genomic dependence. 
As none of the previous studies of mtDNA SNPs have been performed with correction for population stratification, let alone bi-genomic dependence, our results indicate that all such published associations should be considered preliminary. In principle, this conclusion should not be limited to associations with SZ. Results From a literature search of mtDNA SNPs previously associated with SZ, we identified eight that were also typed by the PsychChip, Table 1. PsychChip data from 23,743 normal Danes and 2,538 SZ patients (detailed in S1 Table) showed that the SNPs were present in the population with frequencies varying from 0.2%-20.6%, Table 1. There was no appreciable difference in mtDNA hg or nuclear ancestry distribution between controls and SZ patients, S1 and S2 Figs. Haplogroup distribution of mtDNA SNPs The potential affiliation, based on PhyloTree, of SNPs to different mtDNA hgs is shown in Table 1, and the actual mtDNA hg distribution in the controls (not different from that of the SZ patients, S1 Fig) is shown in Fig 1. For all SNPs, there is a marked difference in the actual mtDNA hg distribution between the two alleles at the same position. Thus, when comparing persons with either of two alleles at the same mtDNA position, the comparison is between groups with widely differing mtDNA distributions. A PCA analysis of the mtDNA sequences in persons with either the A or G allele at position 15,043 is shown in Fig 2A. This analysis shows that the difference in mtDNA sequence, while subtle, indicates independent mtDNA distribution between the alleles. Nuclear ancestry of mtDNA SNPs The distribution of nuclear ancestries in control persons as a function of each mtDNA allele revealed major differences, both between different positions (inter-SNP) in the mtDNA and between alleles at the same position. Association between schizophrenia and mtDNA SNPs The association of each mtDNA SNP with SZ was assessed (Fig 4). 
In consequence of the inter-allelic differences in mtDNA hg affiliations and nuclear ancestry demonstrated above, several association analyses were performed. Five SNPs, m.1438A, m.3197C, m.3666A, m.4769A, and m.9377G, showed no association with SZ, both when all persons were included and when selection was made to reduce effects of varying mtDNA and nuclear ancestry affiliations. The m.10398G SNP was marginally significantly, and m.15043A significantly, associated with reduced risk of SZ in All and All-Danish mtDNA hgs. The m.15218G was associated significantly, or borderline significantly, with a reduced risk for SZ irrespective of the grouping, Table 2. The contrary was found on a fixed mtDNA hg background: none of the SNPs exhibited a significant association with SZ. Discussion Here we show the dependence of mtDNA SNPs on both mtDNA hgs and gDNA clusters, due to population structure and the shared demographic history of mtDNA and gDNA; we have called this relationship "bi-genomic dependence". This has the consequence that an association between a particular allele, in a specific SNP, and disease is not exclusively the result of the presence of that particular allele, since each mtDNA allele is associated with unique distributions of both mtDNA hgs and gDNA clusters. Furthermore, bi-genomic dependence can be accounted for by including mtDNA and gDNA principal components from PCAs in association analyses-thus incorporating a bi-genomic measure of population stratification and admixture into mtDNA association analyses. Currently, such a measure of bi-genomic dependence is not incorporated into studies of mtDNA-disease-association, which consider an association as evidence for a specific effect of an allele on the function of a protein or RNA coded for by the mtDNA and, consequently, as a cause of pathophysiological changes. 
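The correction strategy described above, entering principal components from both genomes as covariates in the disease model, can be illustrated with a small self-contained sketch. This is not the authors' pipeline (they used R with PLINK-derived eigenvectors); the data are simulated, and the "allele" and "PC" variables are purely hypothetical, constructed so that the allele is confounded with ancestry while disease risk depends on ancestry alone.

```python
import math
import random

def fit_logistic(X, y, lr=0.3, iters=800):
    """Fit a logistic regression by batch gradient ascent on the log-likelihood.
    X is a list of feature rows; the first entry of each row is 1.0 (intercept)."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        grad = [0.0] * p
        for xi, yi in zip(X, y):
            z = sum(b * x for b, x in zip(beta, xi))
            pred = 1.0 / (1.0 + math.exp(-z))
            for j in range(p):
                grad[j] += (yi - pred) * xi[j]
        beta = [b + lr * g / n for b, g in zip(beta, grad)]
    return beta

random.seed(0)
# Simulated cohort: a hypothetical mtDNA allele tracks a nuclear ancestry
# PC, and disease risk depends on the PC only -- not on the allele itself.
alleles, pcs, status = [], [], []
for _ in range(1000):
    pc = random.gauss(0.0, 1.0)                         # toy gDNA PC1 score
    allele = 1 if random.random() < 1.0 / (1.0 + math.exp(-pc)) else 0
    risk = 1.0 / (1.0 + math.exp(-(-1.0 + 0.8 * pc)))   # risk driven by ancestry
    alleles.append(allele)
    pcs.append(pc)
    status.append(1 if random.random() < risk else 0)

# Naive model (intercept + allele): the allele absorbs the ancestry effect.
naive = fit_logistic([[1.0, a] for a in alleles], status)
# Adjusted model (intercept + allele + PC): the allele effect shrinks.
adjusted = fit_logistic([[1.0, a, pc] for a, pc in zip(alleles, pcs)], status)
print("allele log-OR, naive: %.3f  adjusted: %.3f" % (naive[1], adjusted[1]))
```

In this construction, the allele's log-odds ratio shrinks toward zero once the PC covariate is included, mirroring how the m.15043A and m.15218G associations disappeared after correction for haplogroup affiliation and nuclear ancestry.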
The linkage disequilibrium between different alleles of a SNP and mtDNA hgs and sub-hgs is not surprising, as hgs are defined by series of evolutionarily conserved SNPs. The particular distribution of subsets of mtDNA hgs, sub-hgs and individual SNPs that are associated with a particular allele at a specific SNP will depend on the population history and the extent and source of admixture. In most countries, and in particular in Europe, such history is very complicated and incompletely clarified. m.10398G was found associated with SZ in Han Chinese; however, when the cohort was broken down with respect to mtDNA hgs, the association disappeared [24]. This illustrates that a specific allele's mtDNA hg distribution may induce spurious association with SZ. Spurious associations between mtDNA SNPs and a particular phenotype, when restricted to persons of a specific mtDNA hg [25], may however be due to population stratification at the sub-hg level. In the iPSYCH cohort there was no association between SZ and any of the European mtDNA hgs (data not shown). As mtDNA replicates independently of the cell cycle and without recombination, mtDNA hgs and SNPs should be independent of the gDNA, but only if the population is infinitely large and in the absence of population substructure. This is often not the case, due to geographical population substructure, recent admixture, and socially and culturally defined restrictions in the choice of spouse. A recent study showed that most Danish grandparents of present-day high school students chose spouses within a short distance of their birthplace [26]. This practice will, with time, lead to regionalization, and a southwestern to northeastern gradient was found [26]. Furthermore, immigrants may seek a partner from within their ethnic community. Such effects have been eliminated in some studies by restricting the participants in mtDNA association studies to persons with a three-generation presence in the population. 
However, it has not been documented that this is sufficient to obviate association or linkage disequilibrium between mtDNA SNPs and specific gDNA clusters. Extensive gDNA micro-scale heterogeneity has been documented in the UK [21] and Western France [27], and admixture has been an important factor in the accretion of the present-day genomic variation of Europe [22,28]. The UK study [21] showed that this is not just a result of recent demic changes; however, recent migrations may lead to widespread bi-genomic dependence. Schizophrenia is a complex syndromic disease [29] with geographically varying prevalence [30], characterized by a markedly elevated prevalence among first and second generation immigrants [31,32], particularly among persons with dark skin moving to Nordic latitudes [33]. These epidemiological characteristics of SZ obviously increase the risk of spurious associations caused by subtle admixture and bi-genomic dependence. However, this does not per se refute the mitochondrial pathogenic paradigm [34], where variation in mitochondrial function, believed to interfere with ATP production [35,36], inflammation and signaling [37,38], as well as Ca2+ homeostasis [39] and apoptosis [38], is considered to be of paramount importance for development of disease. Several neuroanatomical post-mortem findings in SZ brains indicate perturbed mitochondrial function [40], but such findings are difficult to distinguish from changes caused by drug treatment. The iPSYCH data are prospective and signs of immigration are apparent [20], but they also showed that the variation in ancestry differed greatly between mtDNA hgs-even within traditional European hgs, i.e. mtDNA hg U, where ancient European sub-hgs occurred together with U-sub-hgs of recent Near Eastern and Central Asian origin [20]. 
Thus, bi-genomic dependence is likely to be a confounder and may lead both to false positive as well as false negative associations with disease. The method of correction for bi-genomic dependence in association studies will depend on the specific mtDNA SNP examined, the population structure and history, as well as the size of the study population. If population stratification involving gDNA is inherent when performing association studies with mtDNA SNPs, it should be expected that diseases with geographically varying prevalence would be likely to be found associated with specific mtDNA SNPs. The largest mtDNA association study to date [18] found mtDNA SNPs associated with ulcerative colitis, which exhibits a European North-South and East-West gradient [41], and with multiple sclerosis, which exhibits a longitudinal prevalence gradient [42] and an effect of immigration [43]. The same study found that the prevalence of mtDNA SNPs associated with Parkinson's disease was lower in African and Asian people [44]. Furthermore, the incidence of primary biliary cirrhosis is very high in North East England, 50% lower in the rest of England and Scandinavia, and 90% lower in the Middle East and Asia [45]. A major problem with the interpretation of mtDNA SNP variants is the difficulty of performing a meaningful and reproducible assessment of mitochondrial function. In vitro studies of mitochondrial function, e.g. enzymatic activity measurements of OXPHOS components in cells, tissues [46] or cybrids [47], as well as allotopic expression [15], are difficult to interpret as they also interfere with the inherent cellular control of mitochondrial function [38]. Furthermore, it should be noted that mtDNA hgs and sub-hgs are cladistic groups and not functional units. Thus, in the Danish population, the U-hg is composed of a range of sub-hgs, e.g. U5a, U5b, U6, U7, and U8, with widely differing nuclear ancestries, reflecting migrations rather than selection [20]. 
It is thus meaningless to ascribe a specific functional effect to a particular mtDNA hg without having carefully examined both mtDNA and nuclear genetic variation and corrected for stratifications in both. Previous conflicting studies of disease associations with mtDNA have been suggested to be the result of insufficient power [48], insufficient stratification with respect to sex, age, and geographical background [49] or population admixture [50], or the use of small areas of recruitment risking "occult" founder effects [51]. The fact that careful control, as here, of these factors and of the bi-genomic dependence results in none of eight previously SZ-associated mtDNA SNPs being associated with SZ in the very large Danish iPSYCH cohort suggests that previously reported associations could indeed be spurious findings due to cryptic population stratification. Meta-analyses pooling studies from different populations [15] do not necessarily solve this problem; they may aggravate it by introducing further sub-stratification of the total population analyzed. The extensive bi-genomic dependence demonstrated in the Danish population makes this phenomenon the most parsimonious explanation of non-replicable associations with mtDNA variants; this applies not only to associations with SZ, since bi-genomic dependence can obviously interfere with associations between mtDNA and all types of diseases and traits. Ethics statement The iPSYCH cohort study (www.ipsych.au.dk) is register-based, using data from Danish national health registries. The study was approved by the Scientific Ethics Committees of the Central Denmark Region (www.komite.rm.dk) (J.nr.: 1-10-72-287-12) and executed according to guidelines from the Danish Data Protection Agency (www.datatilsynet.dk) (J.nr.: 2012-41-0110). Passive, but not informed, consent was obtained, in accordance with Danish Law nr. 593 of June 14, 2011, para 10, on the scientific ethics administration of projects within health research. 
SZ patients and controls As part of the iPSYCH recruitment protocol, 23,743 controls, born between May 1, 1981 and Dec 31, 2005, were selected at random from the Danish Central Person Registry. Among persons born within the same time span, 2,538 persons assigned an ICD-10 F20 diagnosis were identified in the Danish National Patient Registry. All were singletons, were alive one year after their birth, and had a mother registered in the Danish Central Person Registry. DNA was available from DBS cards obtained from the Danish Neonatal Screening Biobank at Statens Serum Institut [52]. Demographics of patients and controls are given in S1 Table. Genetic analysis and mtDNA SNPing From each DBS card two 3.2-mm disks were excised, from which DNA was extracted using the Extract-N-Amp Blood PCR Kit (Sigma-Aldrich, St Louis, MO, USA) (extraction volume: 200 μL). The extracted DNA samples were whole genome amplified (WGA) in triplicate using the REPLIg kit (Qiagen, Hilden, Germany), then pooled into a single aliquot. Finally, WGA DNA concentrations were estimated using the Quant-IT Picogreen dsDNA kit (Invitrogen, Carlsbad, CA, USA). The amplified samples were genotyped at the Broad Institute (MA, USA) using the PsychChip (Illumina, CA, USA), typing 588,454 variants, developed by the Psychiatric Genetic Consortia. We then isolated the 418 mitochondrial loci and reviewed the genotype calls before exporting into the PED/MAP format using GenomeStudio (Illumina, CA, USA). Haplo-grouping of mtDNA was performed using the defining SNPs reported in www.phylotree.org [53]. Nuclear ancestry Nuclear ancestry estimation was done using ADMIXTURE 1.3.050 in the supervised approach. Briefly, reference populations consisting of the Human Genome Diversity Project (HGDP) (http://www.hagsc.org/hgdp/), a Danish (716 individuals) and a Greenlandic (592 individuals) genotyping SNP data set were used. 
The final reference data set consisted of 103,268 autosomal SNPs and 2,248 individuals assigned to one of nine population groups: Africa, America, Central South Asia, Denmark, East Asia, non-Danish Europe, Greenland, Middle East and Oceania. The number of clusters, K, was set to eight, based on principal component analysis clustering (data not shown). The subpopulations were merged with the reference population data set and analyzed using ADMIXTURE. For prediction of the ancestry of individuals within the mtDNA hgs we created a random forest model [54] based on the reference data set, with the clusters Q1-8 as predictors and population groups as outcome. Thus, the ancestry analysis of the individual person was the result of a supervised prediction. Prediction was done using R version 3.2.2, using the caret package. Statistics The statistical significance of differences in mtDNA SNP proportions between controls and SZ patients was assessed using a permutation version of Fisher's exact test. Samples with missing sequence data were excluded. To assess differences in allele distribution within the predicted ancestries we used Pearson's chi-squared test. Ancestries with allele counts below six were not included. Calculations were performed using R (version 3.1.3). Principal component analysis (PCA) was performed using PLINK (v1.90b3.31). For the PCA, the reference population variants were extracted from the iPSYCH control sample, LD pruned (indep-pairwise 50 5 0.5), allowing only SNPs with a 99% genotyping rate. Prior to PCA of the mtDNA data, samples were loaded into GenomeStudio (version 2011.a), a custom cluster was created using Gentrain (version 2), and, following automatic clustering, all positions with heterozygotes were manually curated. The data were exported relative to the forward strand using the PLINK Input Report Plugin (version 2.1.3). Eigenvectors were calculated using PLINK (v1.90b3.31). 
PCA plots were created using the package ggplot2 (version 1.0.1) in R (version 3.1.3).
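The "permutation version of Fisher's exact test" mentioned in the Statistics section can be sketched as a label-permutation test on carrier frequencies. This is an illustrative stand-in written in Python rather than the authors' R code, and the counts in the usage line are hypothetical.

```python
import random

def permutation_test(carriers_case, n_case, carriers_ctrl, n_ctrl,
                     n_perm=2000, seed=1):
    """Two-sided empirical p-value for a difference in carrier frequency
    between patients and controls, obtained by permuting case/control labels
    over the pooled cohort and comparing permuted to observed differences."""
    pool = ([1] * carriers_case + [0] * (n_case - carriers_case)
            + [1] * carriers_ctrl + [0] * (n_ctrl - carriers_ctrl))
    obs = abs(carriers_case / n_case - carriers_ctrl / n_ctrl)
    rng = random.Random(seed)
    total_carriers = sum(pool)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pool)
        case_carriers = sum(pool[:n_case])          # carriers relabeled as cases
        diff = abs(case_carriers / n_case
                   - (total_carriers - case_carriers) / n_ctrl)
        if diff >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one rule avoids reporting p = 0

# Hypothetical counts: an allele carried by 60/500 patients vs 40/500 controls.
print(permutation_test(60, 500, 40, 500))
```

Because the p-value is estimated empirically from relabelings of the actual cohort, this approach makes no distributional assumptions, which is attractive for the rare alleles (frequencies down to 0.2%) in the study.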
Mycetoma and Chromoblastomycosis: Perspective for Diagnosis Improvement Using Biomarkers Background: Mycetoma and chromoblastomycosis are both chronic subcutaneous infectious diseases that pose an obstacle to socioeconomic development. Besides the therapeutic issue, the diagnosis of most neglected tropical diseases (NTD) is challenging. Confirmation using direct microscopy and culture, recognized as WHO essential diagnostic tests, is limited to specialized facilities. In this context, there is a need for simple, user-friendly diagnostic tests to be used in endemic villages. Methods: This review discusses the available biomarkers that could help to improve the diagnostic capacity for mycetoma and chromoblastomycosis from a theoretical and practical perspective. Results: A lack of research in this area has to be deplored, mainly for mycetoma. Biomarkers based on the immune response (pattern of leucocytes, antibody detection), the dermal involvement (extracellular matrix monitoring, protein expression), and the presence of the infectious agent (protein detection) are potential candidates for the detection or follow-up of infection. Conclusion: Confirmatory diagnosis based on specific diagnostic biomarkers will be the basis for the optimal treatment of mycetoma and chromoblastomycosis. It will be part of the global management of NTDs under the umbrella of stewardship activities. Introduction Mycetoma and chromoblastomycosis are both chronic subcutaneous infectious diseases that pose a devastating obstacle to public health, poverty reduction, and socioeconomic development. For these reasons, they were formally recognized by the World Health Organization (WHO) as neglected tropical diseases (NTD) in 2016. Mycetoma is caused by different species of fungi (eumycetoma) or aerobic filamentous bacteria (actinomycetoma) [1], whereas chromoblastomycosis is caused only by fungi. 
The causative organisms of eumycetoma, including Madurella mycetomatis, Trematosphaeria grisea, and Scedosporium apiospermum, are distributed worldwide, but the main endemic areas, known as the Mycetoma Belt, include the Bolivarian Republic of Venezuela, Chad, Ethiopia, India, Mauritania, Mexico, Senegal, Somalia, Sudan, and Yemen [2]. The clinical characteristics of mycetoma include local swelling, multiple sinuses, and discharge which contains the infective forms, known as "grains". Mycetoma usually affects the foot, leading to substantial disability in advanced cases due to bone destruction by the infectious agent, and in some cases may be fatal following secondary bacterial septicemia. Indeed, many patients are late presenters with advanced infection, given the painlessness of the disease and the scarcity of medical and health facilities in endemic areas; for such patients, amputation may be the only available treatment. The causative fungi of chromoblastomycosis, including Fonsecaea pedrosoi and Cladophialophora carrionii, are distributed worldwide, but the highest prevalence of the disease is found in the Amazon region of Brazil, the northern part of Venezuela, Costa Rica, the Dominican Republic, and in Madagascar [2]. Clinical presentations of chromoblastomycosis are polymorphic, the most frequent being nodular, verrucous, and tumoral-like. Mycetoma and chromoblastomycosis are treated using intensified disease management, including anti-infective agents (antibiotic or antifungal) and/or surgical treatment based on the needs and clinical expression of disease in each individual patient [3]. However, most of the current medicines have limited effectiveness, many side effects, and are not available in endemic countries because of their high costs. Besides the therapeutic issue, the diagnosis of NTDs is challenging. 
It was reported in a recent expert consensus report that, in basic healthcare settings, direct microscopy combined with clinical signs were the most useful diagnostic indicators to prompt referral for treatment [4]. Moreover, microscopy and culture are now recognized as WHO essential diagnostic tests. However, in endemic countries with limited health access, the initial visual recognition from the clinical examination of a suspected case may be unavailable. Confirmation using direct microscopy and culture is limited to specialized facilities. In this context, there is a need for simple, user-friendly diagnostic tests for use in mycetoma- and chromoblastomycosis-endemic villages. The aim of this review is to discuss the available biomarkers that could help to improve the diagnostic capacity for mycetoma and chromoblastomycosis and to discuss innovative diagnostic solutions from a theoretical and practical perspective. Mycological Diagnosis of Mycetoma and Chromoblastomycosis The diagnosis of mycetoma and chromoblastomycosis requires laboratory confirmation by direct examination and/or histopathology. Mycetoma direct examination is based on the morphological and physiological characteristics of the grains discharged from sinuses, whereas the observation of muriform cells in clinical specimens is compulsory for the diagnosis of chromoblastomycosis. Although direct examination is helpful in detecting the diseases, it is important to culture the causative organism properly. Indeed, some species may be resistant to antifungals. In addition, identification may contribute to data on the epidemiology and biodiversity of the etiological agents worldwide. The common methods used for the identification of pathogenic fungi isolated from culture are based on the microscopic examination of morphological characteristics, allowing identification to the genus level. However, these methods are time-consuming, require expertise in microscopy, and have low specificity. 
Further identification to the species level requires molecular approaches based on PCR amplification and the sequencing of conserved genomic regions. Molecular tools are especially needed for the implementation of treatment and/or disease surveillance. For instance, the agents of the disease may be morphologically indistinguishable from their environmental counterparts, or cryptic species such as F. pedrosoi, F. monophora, and F. nubica, may coexist, but with different virulence and invasive potentials. As an example, new molecular tools to distinguish Fonsecaea involved in chromoblastomycosis have been developed based on padlock probes in rolling circle amplification; these tools allow the detection and species differentiation of Fonsecaea agents without sequencing [5]. Molecular approaches are based on PCR amplification and the sequencing of conserved genomic regions and have been shown to allow the resolution of the causative agents to the species level, but these approaches are onerous and relatively time-consuming. Besides molecular-based identification, matrix-assisted laser desorption ionization-time of flight (MALDI-TOF) mass spectrometry (MS) has been shown to provide a robust, cost-effective, and rapid identification at the species level of a variety of fungi from pure culture. As an example, the identification of the agents of black-grain mycetoma by MALDI-TOF-MS demonstrated the accurate identification of eumycetoma agents and related fungi [6]. Despite the recognition of microscopy and culture as WHO essential diagnostic tests, these tests are still limited to specialized facilities in endemic areas. The identification of fungi at the species level, while useful to optimize disease management, is also not available in most endemic areas. Pattern of Leukocytes and Cytokines Considering the inflammatory nature of the diseases, the pattern of leucocytes is one opportunity to detect the infection. 
This strategy was reported for both mycetoma and chromoblastomycosis. During mycetoma, suppurative granulomas (composed of neutrophils) surrounding characteristic grains were described in the subcutaneous tissue [7]. The neutrophilic infiltrate is surrounded by histiocytes and a mixed inflammatory infiltrate, including lymphocytes, plasma cells, eosinophils, and macrophages. Skin lesions of actinomycetoma (ACM) and eumycetoma (EUM) were studied to compare cell elements in the inflammatory infiltrate [8]. In both groups of mycetoma, CD4 and CD8 T lymphocytes were identified surrounding the neutrophil aggregates with macrophages, whereas B lymphocytes were not identified. Interestingly, a higher number of CD8+/lymphocytes (p = 0.02) and macrophages (p = 0.01) were observed in ACM lesions compared to EUM lesions [8]. The pattern of leukocytes may be a valuable approach to distinguish ACM and EUM that would benefit from different anti-infective treatments. The histology of chromoblastomycosis is characterized by a foreign body organized granuloma with isolated areas of microabscess formation mainly composed of giant cells and groups of fungal cells. It was reported that CD4+ and CD8+ T lymphocyte populations, B lymphocytes, neutrophils, and macrophages play a significant role in the cell-mediated response during chromoblastomycosis [9]. Populations of macrophages, lymphocytes, neutrophils, and Langerhans cells and their correlation with the expression of macrophage inflammatory protein-1α (MIP-1α), chemokine receptors (CXCR3, CCR1), and enzymes (superoxide dismutase, SOD, and nitric oxide synthase, iNOS) were indeed studied in order to better characterize the cell-mediated immune reactivity of chromoblastomycosis [10]. Biopsies of patients with a clinical and histopathological diagnosis of chromoblastomycosis were studied using immunohistochemistry before the beginning of treatment. Fungi numbers were correlated with CD3, CD45RO, and iNOS positive cells. 
Furthermore, MIP-1α expression was associated with CD45RO, CD68, iNOS, and CXCR3. The authors suggested a possible role of MIP-1α in fungal persistence during chromoblastomycosis and a regulatory role of macrophage activation in determining the outcome and fungal destruction in chromoblastomycosis infections. These results suggested the potential use of MIP-1α as a prognostic biomarker. In another study [11], it was demonstrated that monocytes from patients with different clinical forms of chromoblastomycosis had distinct phenotypic and functional profiles. A higher production of IL-10 and a lower expression of HLA-DR and costimulatory molecules were indeed observed in monocytes of patients presenting a more severe form of the disease. This was confirmed by Mazo Fávero Gimenes et al., who demonstrated a predominant production of IL-10 with inhibition of IFN-γ in patients with more severe clinical forms, resulting in the low-level induction of T cells compared to patients with mild chromoblastomycosis; mild forms of CBM favor a Th1 profile that may inhibit disease development, while moderate forms trigger an intermediate response between Th1 and Th2 [12]. The monocyte subsets, as well as the Th1/Th2 profile, may be valuable in identifying patients with severe forms of chromoblastomycosis and may be used as a prognostic biomarker. This approach, based on the pattern of leukocyte subpopulations, is not specific enough to be used as a diagnostic tool. As stated by the authors, it may be used at the time of diagnosis to distinguish the two mycetoma entities or as a prognostic tool during chromoblastomycosis. Antibodies Antibody production may be observed during mycetoma or chromoblastomycosis as a consequence of the host response to the presence of an infectious agent. Antibodies can be directed towards the infectious agent itself or toward immune cells of the host in relation with the infection. 
This was observed during chromoblastomycosis, leading to the secretion of anti-Fonsecaea pedrosoi IgG, as well as anti-neutrophil cytoplasmic antibodies (ANCAs). Anti-F. pedrosoi IgG In patients with severe chromoblastomycosis, the level of anti-F. pedrosoi IgG (IgG1, IgG2, and IgG3) was higher compared to patients with moderate or mild disease (p < 0.05) [13]. After treatment, the mean antibody titers of IgG, IgG1, and IgG2 were reduced (p < 0.05) [14]. Furthermore, a reduction in IgG3 and IgG titers was observed in patients with a rapid response (p < 0.05), and a reduction in IgG2 in patients with rapid and intermediate responses (p < 0.05) [13]. Interestingly, the immunological analysis showed that anti-F. pedrosoi antibodies did not provide protection against infection [13]. The detection of anti-F. pedrosoi IgG may provide a specific diagnostic biomarker of chromoblastomycosis caused by F. pedrosoi. Anti-Neutrophil Cytoplasmic Antibodies (ANCAs) Patients suffering from chromoblastomycosis were tested for the presence of ANCAs [14]. Among them, 20% had detectable ANCAs. This study demonstrates that chromoblastomycosis triggers autoreactivity against myeloid lysosomal antigens. ANCAs may have a place in the diagnosis of chromoblastomycosis, although this biomarker will not be specific to the infection. Monitoring of the Extracellular Matrix During chromoblastomycosis, there is a marked pseudoepitheliomatous hyperplasia of the epidermis, and in some areas, the apparent transepidermal elimination of fungal cells, which can be found in the stratum corneum [15]. The subcutaneous pathology of chromoblastomycosis led to the study of the extracellular matrix during this infection, looking at potential metabolites that could help the diagnosis. The collagen content and the turn-over of the extracellular matrix, based on serum and urinary metabolites (pyridinoline and pentosidine), were then monitored in patients with a diagnosis of chromoblastomycosis [16]. 
The serum level of type III collagen was correlated with the lesion size. In patients whose lesion size reduced by more than 50% during terbinafine treatment, urinary pyridinoline was higher compared to patients whose lesion size did not significantly reduce. It was demonstrated that pyridinoline and pentosidine cross-links increased in the lesions during treatment, whereas a significant reduction in collagen content was observed [16]. The monitoring of collagen content and cross-linking in chromoblastomycosis patients could be used as a biomarker for diagnosis and treatment follow-up. While the extracellular matrix has not been studied during mycetoma, it would be worth evaluating this aspect of the disease, considering that mycetoma is clinically characterized by a subcutaneous mass. In any case, this biomarker will not be specific to the disease, but more useful as a biomarker of treatment efficacy.

Protein Expression

The presence of an infectious agent may lead to protein secretion in relation with the infectious agent itself or as a consequence of its presence in the host, leading to an epidermal proliferation.

Translationally Controlled Tumor Protein

In the case of mycetoma, a protein homologous to the translationally controlled tumor protein (TCTP) was demonstrated to be present in M. mycetomatis [17]. Indeed, TCTP was secreted into the culture medium and was expressed on hyphae present in the black grains of eumycetoma. Moreover, significant IgG and IgM immune responses against TCTP were demonstrated. Interestingly, antibody levels correlated with lesion size and disease duration, as demonstrated by the highest levels of antibodies after a disease duration of 6-15 years. TCTP is the first well-characterized immunogenic antigen of the fungus M. mycetomatis. The authors concluded that TCTP is the first monomolecular vaccine candidate [17].
Another perspective could be the use of TCTP as a diagnostic and prognostic biomarker; this would be specific to mycetoma, as TCTP was proven to be secreted by the infectious agent.

Galectin-3 Expression

Because chromoblastomycosis is characterized by a benign epidermal proliferation, it was suggested that galectin-3 may be an interesting biomarker. Galectin-3 is indeed expressed in basal cell carcinoma and squamous cell carcinoma, in relation with tumor genesis, progression, and metastasis. Galectin-3 expression was studied by immunohistochemistry on skin sections of patients suffering from chromoblastomycosis [18]. A significant downregulation of galectin-3, both in selected benign skin diseases and in skin cancers, was demonstrated. This result indicates that a regulatory pathway of galectin-3 expression during epidermal hyperplasia occurs independently of the differentiation status of keratinocytes [18]. Thus, galectin-3 expression may be used as a diagnostic biomarker of chromoblastomycosis. This biomarker should also be studied during mycetoma, considering that mycetoma also involves epidermal hyperplasia, although it will not be specific to this infectious disease.

Conclusions

Considering that mycetoma and chromoblastomycosis are inflammatory subcutaneous infectious diseases, biomarkers based on the immune response (pattern of leukocytes, antibody detection), the dermal involvement (extracellular matrix monitoring, protein expression), and the presence of the infectious agent (protein detection) are potential candidates for the detection or follow-up of infection. A lack of research in this area is to be deplored, mainly for mycetoma, despite the urgent need for such biomarkers. Advances in identifying specific diagnostic biomarkers of mycetoma and chromoblastomycosis may indeed pave the way for new laboratory-based or point-of-care tests.
Most of the biomarkers described in this review are indeed not specific, or not applicable as prognostic biomarkers, which would subsequently be required. Confirmatory diagnosis will be the basis for the optimal treatment of mycetoma and chromoblastomycosis. It will be part of the global management of NTDs under the umbrella of stewardship activities.
Characterization of High-Gamma Activity in Electrocorticographic Signals

Introduction: Electrocorticographic (ECoG) high-gamma activity (HGA) is a widely recognized and robust neural correlate of cognition and behavior. However, fundamental signal properties of HGA, such as the high-gamma frequency band or the temporal dynamics of HGA, have never been systematically characterized. As a result, HGA estimators are often poorly adjusted, such that they miss valuable physiological information. Methods: To address these issues, we conducted a thorough qualitative and quantitative characterization of HGA in ECoG signals. Our study is based on ECoG signals recorded from 18 epilepsy patients while performing motor control, listening, and visual perception tasks. In this study, we first categorize HGA into HGA types based on the cognitive/behavioral task. For each HGA type, we then systematically quantify three fundamental signal properties of HGA: the high-gamma frequency band, the HGA bandwidth, and the temporal dynamics of HGA. Results: The high-gamma frequency band strongly varies across subjects and across cognitive/behavioral tasks. In addition, HGA time courses have lowpass character, with transients limited to 10 Hz. The task-related rise time and duration of these HGA time courses depend on the individual subject and cognitive/behavioral task. Task-related HGA amplitudes are comparable across the investigated tasks. Discussion: This study is of high practical relevance because it provides a systematic basis for optimizing experiment design, ECoG acquisition and processing, and HGA estimation. Our results reveal previously unknown characteristics of HGA, the physiological principles of which need to be investigated in further studies.

Introduction

The human brain's electrophysiology has been studied extensively over the past decades, dating back to the first electroencephalographic (EEG) recordings performed by Berger (1929).
Since then, brain signals have been categorized into distinct frequency bands (e.g., delta, theta, alpha, beta, gamma, and high-gamma). Signals in these bands are commonly associated with various cortical processes that reflect different states of mind (Pfurtscheller and Lopes da Silva, 1999; Pfurtscheller, 2001). HGA tracks cognitive and behavioral task engagement with high spatiotemporal fidelity and exhibits outstanding consistency over task repetitions. These qualities make HGA highly suitable for invasive brain-computer interface (BCI) applications, such as motor rehabilitation systems that provide prosthetic limb control and movement restoration (Leuthardt et al., 2004; Shenoy et al., 2008; Kubanek et al., 2009; Yanagisawa et al., 2011; Pistohl et al., 2012; Jiang et al., 2017; Li et al., 2017; Pan et al., 2018; Gruenwald et al., 2019; Thomas et al., 2019), speech prostheses that synthesize speech directly from cortical activity (Leuthardt et al., 2004; Pei et al., 2011a; Herff et al., 2015), and decoding of visual perception (Rupp et al., 2017; Kapeller et al., 2018a,b). For all these applications to perform well, HGA extracted from ECoG must match the true physiological activity as closely as possible. This requires isolating the physiological activity generated by the cognitive or behavioral task of interest from other physiological activity and from the noise introduced by the HGA estimator. Common performance metrics in this context are Pearson's correlation coefficient, the signal-to-noise ratio (SNR), the mean squared error (MSE), and mutual information. There exist a variety of qualitative and quantitative characteristics of HGA. One such qualitative characteristic is that different cognitive and behavioral tasks can produce different types of HGA.
For example, a motor control task may produce a smooth HGA type in sensorimotor cortex with relatively slow transients, whereas a receptive or expressive language task may produce a burst HGA type in Broca's or Wernicke's area with relatively fast transients. Figure 1G illustrates this conceptual relationship between HGA types and cognitive and behavioral tasks. Figures 1H-J further illustrate three quantitative characteristics of HGA, which we also refer to as fundamental signal properties hereafter: First, the high-gamma frequency band is the set of adjacent spectral components subject to physiological task-related power modulation. The high-gamma frequency band is typically defined by a lower and an upper cutoff frequency, e.g., 60-300 Hz. Second, the HGA bandwidth refers to the highest frequency component present in the HGA time course (e.g., 20 Hz). Third, the temporal dynamics of HGA describe the shape of task-related HGA time courses, e.g., in terms of rise time, duration, and amplitude. These three fundamental signal properties are likely to be different for each HGA type. We refer to the identification and assessment of qualitative and quantitative characteristics of HGA as HGA characterization hereafter. Despite its extensive use in various application contexts, HGA has never been systematically characterized. However, such a characterization is essential for several practical reasons. First, knowledge of the high-gamma frequency band is required to adjust fundamental recording and processing parameters (e.g., the sampling rate of the biosignal amplifier; the frequency band of the HGA estimator). Furthermore, knowledge of the HGA bandwidth is required to adjust the HGA estimator's feature rate (i.e., the number of HGA estimates computed per second) according to the sampling theorem.
Finally, knowledge of the temporal dynamics of HGA is essential for experimental protocol design (e.g., with appropriate task duration) and for adjusting processing algorithms (e.g., with appropriate size and location of a BCI classifier window). In this paper, we address the issues described above and present a systematic characterization of HGA in ECoG signals. This characterization is based on ECoG signals recorded from 18 epilepsy patients with temporarily implanted ECoG electrodes while they performed motor, listening, and visual perception tasks. In this study, we first categorize HGA into HGA types associated with cognitive/behavioral tasks. For each HGA type, we then systematically quantify the three fundamental signal properties of HGA identified above: (1) the high-gamma frequency band, (2) the HGA bandwidth, and (3) the temporal dynamics of HGA. In a final step, we summarize and discuss our results, focusing on their relevance to HGA estimation.

Materials and methods

Subjects

We evaluated ECoG signals recorded from 18 patients (S01-S18) with intractable epilepsy who underwent clinically indicated localization and subsequent resection of their seizure onset zone. For this purpose, the patients were implanted with subdural electrode grids and strips over their left and/or right hemispheres. The grids remained implanted for a duration of up to two weeks and were used for ECoG-based functional mapping to assist in surgical planning. S01-S11 were patients at Albany Medical College (Albany, New York), and S12-S18 were patients at Asahikawa Medical University (Asahikawa, Japan). All subjects in this study voluntarily participated in the research experiments, and written informed consent was obtained from each patient before participating in the study. The study was approved by the Institutional Review Boards of both Albany Medical College and Asahikawa Medical University.
Table 1 summarizes subject demographics, electrode coverages, and performed experimental protocols. The individual electrode coverages for subjects S01-S18 are provided in Supplementary material (Section 1).

Cognitive and behavioral tasks

The subjects in this study performed three cognitive and behavioral tasks (see Table 1). These tasks were executed repeatedly, interleaved with a resting-state baseline interval. We refer to each of these repetitive executions as trials. To avoid subject fatigue, we split the experiments into several runs of manageable duration, in which the subject performed a fixed number of trials (e.g., 20) without interruption.

Motor control task

In this task, the subjects were visually cued to use the hand contra-lateral to the ECoG implant to perform a series of gestures from the well-known rock-paper-scissors hand game. We first verified that all subjects were able to perform the three gestures from this game. A screen placed approximately one meter in front of the subject visually cued the subjects to perform the gestures. For each trial in this experiment, the subject performed one gesture. A pictogram of one of the three different gestures was randomly shown for a duration of one second. Each cue was followed by a scrambled picture that served as a 1.5-2.5 s baseline interval. The subjects were instructed to form and hold the requested hand gesture on presentation of the corresponding cue, and to return to a relaxed position on presentation of the scrambled picture. One experimental run consisted of 20-30 trials per gesture (i.e., 60-90 in total). The sequence of gestures was randomized. In total, we collected 1-4 runs comprising a total of 60-240 trials per subject.

Listening task

In this task, the subjects listened to four short narratives presented in their native language through loudspeakers placed in front of them.
Before the narrative started, we recorded a baseline period during which the subject was at rest and not exposed to any auditory input. To suppress environmental noise, we kept the room noise- and distraction-free throughout the experiment. Each baseline interval and each narrative lasted for 10 s.

Visual perception task

In this task, we presented the subjects with a battery of visual stimuli using a screen placed ≈1 m in front of them. At this distance, the stimuli spanned ≈12° (horizontally and vertically) of the visual field. Subjects were asked to keep fixated on the center of the screen. The visual battery comprised seven different categories (body parts, faces, digits, Hiragana words, Kanji words, line drawings, and simple objects), presented in a random sequence and shown in color or monochrome. Each visual stimulus appeared for 200 ms on the screen, followed by a black screen for a duration of 800 ms. Further details are provided in the original research paper (Kapeller et al., 2018b). One experimental run consisted of 40 trials per stimulus category (280 in total). We performed 1-2 runs for each subject.

Signal acquisition and preprocessing

We recorded ECoG signals sampled at 1.2 or 2.4 kHz using a g.HIamp biosignal amplifier (g.tec medical engineering GmbH, Austria) and processed the data in MATLAB (The Mathworks, Inc., Massachusetts, USA) using the g.HIsys High-Speed Online Processing for Simulink toolbox and g.BSanalyze (both g.tec medical engineering GmbH), or the general-purpose BCI2000 software platform (Schalk and Mellinger, 2010). In total, we recorded signals from 2116 electrodes. We visually inspected these signals and discarded 148 electrodes affected by excessive noise or pathologic activity like epileptic discharges. From the remaining 1968 electrodes, we further narrowed down our selection to cortical areas known to be involved in the corresponding cognitive or behavioral task.
For example, we selected electrodes from sensorimotor areas for the motor control task. This finally yielded 771 electrodes across subjects S01-S18 selected for further processing (see Table 1). To improve the signal quality, we applied a common average reference followed by notch filters at the line frequency and its harmonics (i.e., up to half the sampling frequency; Butterworth of order 6; cutoff frequencies at ±2.5 Hz around the respective center frequency). We further used a high-pass filter to remove low-frequency drifts from our recordings (first-order Butterworth; cutoff frequency at 5 Hz). These steps yielded our preprocessed ECoG signals.

HGA estimation

Figure 2 shows the HGA estimation pipeline used in this study, which is based on log band power extraction in the time domain. This pipeline receives the preprocessed ECoG signals (see Section 2.3) as input. First, a time-domain spectral whitening filter (inverse autoregressive filter of order 10; see Gruenwald et al., 2019 for details) is applied. This step balances the power-law ECoG spectrum of the input signal so that all frequency components contribute equally to the subsequently computed band power. Second, a bandpass filter is applied (Butterworth of order 10), which removes all signal components outside the specified lower and upper cutoff frequencies. Third, the signal power is extracted as the mean squares over consecutive, non-overlapping windows of 10 ms length. This step produces HGA estimates at a rate of 100 Hz. Fourth, a log transform is applied, which (1) converts the asymmetric (e.g., χ2) distribution of the HGA estimation noise to a more Gaussian distribution and (2) decouples the variance of the estimation noise from the signal mean, leading to favorable stationary conditions (Bartlett and Kendall, 1946). Finally, an optional Butterworth lowpass filter (order 6; cutoff frequency 10 Hz) is applied to denoise the HGA estimates.
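The pipeline above can be sketched in Python with NumPy/SciPy. This is a minimal sketch, not the authors' MATLAB/g.HIsys implementation: the least-squares AR fit, the exact filter realizations, and the zero-phase denoising filter are simplifying assumptions, and `fit_ar`/`estimate_hga` are illustrative names.

```python
import numpy as np
from scipy.signal import butter, sosfilt, sosfiltfilt, lfilter

def fit_ar(x, order=10):
    """Least-squares AR fit: x[n] ~ sum_k a[k] * x[n-k] (an assumed estimator)."""
    X = np.column_stack([x[order - k - 1 : len(x) - k - 1] for k in range(order)])
    a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    return a

def estimate_hga(x, fs, band=(70.0, 300.0), win_s=0.01, denoise=True):
    """Whitening -> bandpass -> mean squares per 10 ms window -> log -> lowpass."""
    x = np.asarray(x, float)
    a = fit_ar(x)
    white = lfilter(np.r_[1.0, -a], [1.0], x)            # inverse AR (whitening) filter
    sos = butter(5, band, btype="bandpass", fs=fs, output="sos")  # effective order 10
    bp = sosfilt(sos, white)
    n = int(round(win_s * fs))
    m = len(bp) // n
    power = (bp[: m * n] ** 2).reshape(m, n).mean(axis=1)  # mean squares per window
    hga = np.log(power + 1e-12)                            # log transform
    if denoise:
        # zero-phase variant of the 10 Hz denoising lowpass (an assumption)
        sos_lp = butter(6, 10.0, btype="lowpass", fs=1.0 / win_s, output="sos")
        hga = sosfiltfilt(sos_lp, hga)
    return hga
```

With a 1.2 kHz input and 10 ms windows, this yields HGA estimates at the 100 Hz feature rate described in the text.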
Some parameters of this pipeline (e.g., bandpass cutoff frequencies) change during our analyses. We provide concrete values where they apply.

HGA characterization

This section describes our HGA characterization analyses. In a first qualitative step, we identified individual HGA types (Section 2.5.1). Based on the identified HGA types, Sections 2.5.2-2.5.4 present our quantitative characterization of the fundamental signal properties of HGA, i.e., the high-gamma frequency band, the HGA bandwidth, and the temporal dynamics of HGA.

Identifying HGA types

Based on our experience and on HGA reported in the literature, we identified three common HGA types associated with cognitive and behavioral tasks: (1) Smooth HGA is characterized by a smooth activation pattern and relatively slow transients. This type of HGA can be found within sensorimotor cortex in motor control experiments. (2) Burst HGA is characterized by burst activation and fast to intermediate transients. This temporal activation pattern can be found within Broca's area, Wernicke's area, and the auditory cortex during receptive or expressive language tasks. (3) Pulsed HGA exhibits short pulses with fast transients and is produced by the visual cortex and the fusiform gyrus in response to visual stimuli. Note that the cognitive and behavioral tasks considered in this study (i.e., motor control, listening, and visual perception; see Section 2.2) correspond to these HGA types.

High-gamma frequency band

Determining the high-gamma frequency band requires finding a pair of lower and upper cutoff frequencies within which the proportion of physiological, task-related power modulation in the HGA estimates reaches a maximum. To solve this maximization problem, we performed a grid search across lower and upper cutoff frequencies, using z-scores as the output metric.
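The z-score output metric of this grid search (the mean offset-corrected post-onset increase, normalized by the pre-onset standard deviation, as defined in this section) can be sketched as follows; `task_zscore` and its arguments are illustrative names, not the authors' code.

```python
import numpy as np

def task_zscore(hga, onsets, pre, post):
    """z = mu / sigma_pre: mean task-related HGA increase (averaged across
    trials, each offset-corrected by its pre-onset mean), normalized by the
    standard deviation of all pre-onset samples from all trials."""
    trials = np.stack([np.asarray(hga, float)[o - pre : o + post] for o in onsets])
    trials -= trials[:, :pre].mean(axis=1, keepdims=True)  # per-trial offset correction
    mu = trials[:, pre:].mean()         # mean post-onset increase
    sigma_pre = trials[:, :pre].std()   # std over all pre-onset samples
    return mu / sigma_pre
```

Evaluating this metric for every admissible (lower, upper) cutoff pair yields the 15×10 z-score heatmap described below.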
Our grid search comprised 15 logarithmically spaced values between 30 and 100 Hz for the lower cutoff frequency and 10 logarithmically spaced values between 110 and 500 Hz for the upper cutoff frequency, where we excluded all pairs of lower and upper cutoff frequencies yielding a bandwidth of <30 Hz (e.g., 100-110 Hz). For each of the remaining pairs, we first computed the HGA estimates using the pipeline described in Section 2.4 (without the denoising filter, to preserve a maximum of statistical independence). Second, we epoched the HGA estimates into trials, based on task onsets stored in each recording file alongside the ECoG signals. The task-specific duration of these trials encompassed a pre-onset resting-state interval and a post-onset task activity interval. For the motor control, language, and visual perception tasks, we set the pre-onset interval to 0.75, 10, and 0.25 s, and the post-onset interval to 1.5, 10, and 0.5 s, respectively. Third, we offset-corrected each trial by subtracting the mean HGA during the pre-onset interval. Fourth, we computed one z-score, defined as the mean HGA increase µ from the pre-onset interval to the post-onset interval (averaged across trials), normalized by the standard deviation σ_pre of all samples from all trials within the pre-onset interval: z = µ / σ_pre. This procedure yielded a 15×10 (lower cutoff × upper cutoff frequency) heatmap of z-scores for all electrode channels, subjects, and tasks. In a fifth step, we then combined the electrode channels as a weighted average into one subject-specific z-score heatmap per task, where the weights corresponded to the maximum z-score of the respective electrode channel.

HGA bandwidth

The HGA bandwidth refers to the highest frequency component present in the HGA time courses.
To compute these HGA time courses, we used the HGA estimation pipeline shown in Figure 2 (without the denoising filter), where we adjusted the cutoff frequencies of the bandpass filter for each cognitive and behavioral task individually, based on the previously obtained results from the high-gamma frequency band characterization step (see Section 2.5.2 and Figure 4). Specifically, we used 70-300, 50-140, and 50-200 Hz for the motor control task, listening task, and visual perception task, respectively. We then employed the recently published SNR decomposition method to extract the HGA bandwidth from these HGA estimates (Gruenwald et al., 2021). The SNR decomposition method allows the unsupervised quantification of underlying physiological activity in noisy HGA estimates. Here, the term unsupervised means that the SNR decomposition method does not require any information about the cognitive or behavioral task, which makes this method universally applicable to ECoG signals.

To better understand this background component, we investigated the numerical estimation noise floor. For this purpose, we used a simple synthesis technique: first, we computed an autoregressive model (order 20) of the preprocessed ECoG signals recorded from each electrode for a given data set. Second, we generated random noise with the same length and dimensionality as the recorded ECoG and used the autoregressive models to produce signals with exactly the same spectral characteristics as the recorded ECoG but without HGA. Third, we applied the HGA estimator (see Figure 2) to this synthesized data set. Fourth, we calculated the PSD of the estimator output, which yielded the desired numerical noise floor. Figure 3 shows this numerical noise floor, which decreases linearly toward higher frequencies. This is because HGA estimation noise is slightly serially correlated, since HGA estimates are calculated from a bandpass signal that is itself serially correlated.
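The AR-based synthesis step above can be sketched as follows. This is a sketch under assumptions: a least-squares AR fit stands in for whatever AR estimator the authors used, and `ar_surrogate` is an illustrative name.

```python
import numpy as np
from scipy.signal import lfilter

def ar_surrogate(x, order=20, seed=0):
    """Synthesize a signal with (approximately) the same power spectrum as x
    but without task-related HGA, by driving a fitted AR model with white noise."""
    x = np.asarray(x, float)
    # least-squares AR fit: x[n] ~ sum_k a[k] * x[n-k]
    X = np.column_stack([x[order - k - 1 : len(x) - k - 1] for k in range(order)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ a
    rng = np.random.default_rng(seed)
    w = resid.std() * rng.standard_normal(len(x))  # white innovation noise
    return lfilter([1.0], np.r_[1.0, -a], w)       # AR synthesis filter 1/A(z)
```

Feeding such surrogates through the HGA estimator and computing the PSD of its output gives the numerical noise floor described in the text.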
Figure 3 also shows that the numerical noise floor dominates the background component, as the gap between the PSD of the HGA estimates and the numerical noise floor almost vanishes above 5 Hz. To obtain the background component based on all these observations, the SNR decomposition method fits a straight line into the linear regime of the original PSD, e.g., above 5 Hz. The signal component is then obtained by subtracting the background component from the original PSD in the linear domain. Given this decomposition, the HGA bandwidth then corresponds to the frequency where the signal PSD falls below a certain threshold relative to the background component. We chose −3 dB (half noise power) as the threshold, which is a common value in experimental signal power analysis. To keep our analysis tractable, we extracted one HGA bandwidth specific to each subject and task. For this purpose, we first averaged the original PSDs over all channels for each subject and task. To make our approach more robust, we then smoothed the resulting subject-specific original PSDs via a symmetric moving-average filter of 20 samples, corresponding to a frequency resolution of 0.025 Hz. Finally, we extracted the HGA bandwidth from these smoothed, subject-specific original PSDs as described above.

Temporal dynamics of HGA

The third and last fundamental signal property is a set of parameters that describe the temporal dynamics of HGA. As temporal dynamics of HGA we consider the task-related (1) rise time, (2) duration, and (3) amplitude of HGA. Our method automatically identified onsets of task-related HGA and created trials based on them. We then extracted the temporal dynamics for each of these trials and generated a statistical representation across cognitive and behavioral tasks, following the procedure described below. (1) For each subject and task, we pre-selected five channels with the strongest task-related HGA in a preliminary mapping analysis.
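The −3 dB bandwidth extraction described above can be sketched as follows: fit a straight background line to the PSD (in dB) over its linear regime, subtract it in the linear domain, and find where the remaining signal PSD drops 3 dB below the background. `hga_bandwidth` and its parameters are illustrative names, not the published SNR decomposition code.

```python
import numpy as np

def hga_bandwidth(freqs, psd_db, fit_from_hz=5.0, thresh_db=-3.0):
    """HGA bandwidth: frequency where the signal PSD (original PSD minus the
    straight-line background, subtracted in the linear domain) falls below
    `thresh_db` relative to the fitted background."""
    m = freqs >= fit_from_hz
    slope, icpt = np.polyfit(freqs[m], psd_db[m], 1)  # background line in dB
    bg_db = slope * freqs + icpt
    sig_lin = np.clip(10 ** (psd_db / 10) - 10 ** (bg_db / 10), 1e-20, None)
    rel_db = 10 * np.log10(sig_lin) - bg_db           # signal relative to background
    below = np.nonzero(rel_db < thresh_db)[0]
    return float(freqs[below[0]]) if below.size else float(freqs[-1])
```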
(2) We computed HGA estimates for these pre-selected channels employing the pipeline shown in Figure 2, including the denoising filter and using the same task-specific bandpass cutoff frequencies as in Section 2.5.3. (3) We offset-corrected the resulting HGA estimates by the mean value during resting-state. Standard approaches, e.g., based on the signal mean or median, were not appropriate here because these approaches are positively biased by task-related HGA present in the signal. Instead, we implemented a more robust concept based on simple histogram analysis. We observed that the histogram of lowpass-filtered HGA estimates is composed of two components: first, a dominant stationary Gaussian component representing the estimation noise at the baseline level, and second, a non-stationary task-related component manifested by a pronounced right tail. Based on this composition, we determined the baseline level as the histogram peak location, i.e., the mean of the dominant stationary Gaussian component. This histogram peak location is not shifted by the right tail of the histogram, which makes this approach robust against a task-related bias. We offset-corrected all electrode channels by the respective resting-state level. Supplementary material (Section 2) illustrates this offset correction step. In the following notation, we omit any reference to electrode channels, subjects, tasks, or trials for convenience and conciseness. Here, N_s = T_s/T = 16, with T_s = 0.16 s as a robust average HGA rise time and T = 0.01 s as the HGA estimation interval. (5) We detected the onset of task-related HGA whenever d[n] > 0.25 (threshold empirically determined) for at least N_s samples. Then, we epoched s[n] into trials based on the detected onsets (pre- and post-onset duration: 3.0 s and 5.5 s, respectively). (6) For the motor control and listening task, we removed trials where no task or stimulus was present.
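The histogram-based baseline estimation in step (3) above can be sketched as follows (a minimal sketch; `baseline_level` and the bin count are illustrative choices):

```python
import numpy as np

def baseline_level(hga, bins=100):
    """Resting-state baseline as the histogram peak location: the mode of the
    dominant stationary Gaussian component, which is not shifted by the right
    tail produced by task-related HGA."""
    counts, edges = np.histogram(hga, bins=bins)
    k = int(np.argmax(counts))
    return 0.5 * (edges[k] + edges[k + 1])  # center of the most populated bin
```

Unlike the signal mean, this estimate stays near the resting-state level even when a substantial fraction of samples is task-active.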
We omitted this step for the visual perception task due to its high pace, which made it difficult to differentiate between resting-state and activation periods. (7) We processed each of the remaining trials as follows: (7a) We determined the peak location n_pk and amplitude s_pk = s[n_pk]. (7c) We located the beginning n_1 of the task-related HGA as the last zero-crossing of s[n] before the peak n_pk. (7d) Likewise, we located the end n_2 of the task-related HGA where s[n] first fell below zero after n_pk. (7e) Intuitively, we could compute the task-related rise time and duration directly from n_1, n_pk, and n_2. However, this would yield inaccurate results because n_1 and n_2 were obtained via thresholding, which is prone to errors for noisy signals. To overcome this issue, we developed a robust approach to extract the rise time and duration based on the area under the curve (AUC). For this purpose, we express the AUC A of the complete trial as

A = T · Σ_{n=n_1}^{n_2} s[n]   (3)
  = p T (n_2 − n_1) s_pk       (4)
  = p T_d s_pk.                (5)

In Equation 4, we substituted the sum by p(n_2 − n_1)s_pk, where 0 < p < 1 indicates how much of the bounding rectangle T(n_2 − n_1) × s_pk (width × height) is filled by A. In a next step, we recognized that T(n_2 − n_1) is equivalent to the duration, which we introduced as T_d and substituted accordingly in Equation 5. In a last step, we rewrote Equation 5 to compute T_d via

T_d = A / (p s_pk).            (6)

While A and s_pk can be determined from Equation 3 and step (7a), respectively, p is unknown in general. Fortunately, p ≈ 0.5 is a robust approximation in practice. This approximation is justified by the fact that the AUC begins filling the bounding rectangle T_d × s_pk (width × height) from the lower left corner (s[n_1] = 0) to the top (s_pk at n_pk) and back to the lower right corner (s[n_2] = 0). This corresponds to p = 0.5, i.e., an AUC that fills exactly 50% of the bounding rectangle. Consequently, we computed the duration via Equations 3 and 6, step (7a), and p = 0.5.
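The AUC method (Equations 3-6 with p = 0.5) can be checked numerically with a short sketch; it assumes s[n] is already epoched between the zero-crossings n_1 and n_2, and `auc_rise_and_duration` is an illustrative name.

```python
import numpy as np

def auc_rise_and_duration(s, dt, p=0.5):
    """Task-related rise time and duration via the AUC method:
    T_d = A / (p * s_pk), with p ~= 0.5 (half the bounding rectangle)."""
    s = np.asarray(s, float)
    n_pk = int(np.argmax(s))
    s_pk = s[n_pk]
    A_total = dt * s.sum()             # Eq. 3: AUC of the whole response
    A_rise = dt * s[: n_pk + 1].sum()  # AUC up to the peak
    T_d = A_total / (p * s_pk)         # Eq. 6
    T_rise = A_rise / (p * s_pk)
    return T_rise, T_d
```

For a triangular response, the p = 0.5 approximation is exact, which makes a convenient sanity check.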
(7f) We computed the rise time analogously to the previous step, substituting n_2 by n_pk in Equation 3 to obtain the corresponding AUC. (8) We grouped the obtained temporal dynamic measures (i.e., rise time, duration, and amplitude) by task to create a statistical representation.

Results

In the first qualitative HGA characterization step, we identified the three HGA types as smooth, burst, and pulsed HGA.

High-gamma frequency band

Figure 4E shows the high-gamma frequency bands as shaded overlays. These shaded overlays indicate that all subjects exceed the specific threshold (e.g., 80%) relative to their maximum z-score (subject consensus). Consequently, all pairs of lower and upper cutoff frequencies within the area of the highest subject consensus can be regarded as the high-gamma frequency band. For example, 70-300 Hz (95% subject consensus), 50-140 Hz (80% subject consensus), and 50-200 Hz (95% subject consensus) are appropriate high-gamma frequency bands for the motor control, listening, and visual perception task, respectively. Figure 5 presents exemplary results of the high-gamma frequency band analysis for each task. To underline the impact of the high-gamma frequency band on HGA estimation, we show the analysis results for two high-gamma frequency bands: 50-140 Hz (red dots/traces) and 70-300 Hz (blue dots/traces). Figures 5A, B both show results for S15 to illustrate task-related variations of the high-gamma frequency band within the same subject.

HGA bandwidth

The bottom row reports the HGA bandwidth of the individual subjects in each task. For the motor control task, the HGA bandwidth ranged from 3.4 to 6.3 Hz (4.9 Hz on average). For the language task, we obtained an HGA bandwidth from 3.2 to 6.8 Hz, with 5.0 Hz on average. Finally, the HGA bandwidth in the visual perception task ranged from 4.3 to 6.5 Hz (5.8 Hz on average). All these results are also summarized in Figure 4F.
Temporal dynamics of HGA

Figure 7 shows exemplary time courses of detected HGA trials for each cognitive or behavioral task. In these time courses, we indicated the rise time and the decay (i.e., from peak to end of trial) in green and red shading, respectively. Combining the rise time and the decay in these plots yields the overall duration. Figure 4G summarizes the extracted temporal dynamics of HGA as trial histograms of the task-related rise time, duration, and amplitude extracted from real ECoG recordings. These histograms indicate each median and interquartile range (IQR), which we report as follows: for smooth, burst, and pulsed HGA, we obtained a respective median rise time of 114 (IQR: 83-157), 83 (66-118), and 90 (71-127) ms.

Discussion

High-gamma frequency band

The high-gamma frequency band varies considerably across cognitive and behavioral tasks and between different subjects. Figure 4E shows that these variations across subjects can be moderate, such that a relatively wide range of upper and lower cutoff frequencies can be considered a subject-independent high-gamma frequency band for a specific task (high subject consensus). For example, 70-300 Hz yields 95% subject consensus for motor control and 50-200 Hz yields 95% for visual perception. For the listening task, the high-gamma frequency band varies greatly across subjects, so that only a small range in the vicinity of ≈50-140 Hz yields a rather low subject consensus of 80%. To complicate things further, the high-gamma frequency band may even vary within the same subject depending on the cognitive or behavioral task. For example, S15 exhibited substantially different high-gamma frequency bands for the motor control and the visual perception tasks (see Figures 5A, B). To our knowledge, such systematic variations have not been reported before.
Understanding and interpreting the neurophysiological principles governing these variations requires further experiments and analyses that are beyond the scope of this paper. From a practical perspective, however, high-gamma frequency band characterization has two important implications. First, the upper cutoff of the high-gamma frequency band determines the minimum required ECoG recording sampling rate via the Nyquist-Shannon sampling theorem. For example, cognitive/behavioral tasks with an upper high-gamma frequency band cutoff frequency of 300 Hz (motor control, visual perception) require an ECoG recording sampling rate of at least 600 Hz. There is no point in using much higher sampling rates (e.g., 2.4 or 4.8 kHz), unless other phenomena at higher frequencies are of interest. The second practical implication is that variations in the high-gamma frequency band must be addressed by the HGA estimation procedure, which is also underlined by the amplitude variations of the HGA time courses in Figure 5.

Frontiers in Neuroscience | frontiersin.org

Figure 5 caption: High-gamma frequency band analysis. Exemplary results for the motor control task (A), the listening task (B), and the visual perception task (C). For each task, z-score heatmaps (left) and HGA time courses (right) of exemplary subjects and channels are shown. Two high-gamma frequency bands are illustrated: 50-140 Hz (red dots/traces) and 70-300 Hz (blue dots/traces). HGA time courses are presented as trial-averaged z-scores with a bidirectionally applied Butterworth lowpass filter to improve visualization. Translucent horizontal bars in the time course plots indicate the mean z-score during task activity, corresponding to the respective value in the z-score heatmap. Note that the noise of the HGA time courses depends on the number of averaged trials.
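The minimum-sampling-rate rule stated above reduces to a one-line computation. The upper cutoffs plugged in below are the task bands reported in the Results; the function name is ours.

```python
def min_sampling_rate(upper_cutoff_hz):
    # Nyquist-Shannon: the recording rate must be at least twice the
    # highest frequency of interest to represent it without aliasing.
    return 2 * upper_cutoff_hz

# Upper cutoffs from the identified task bands:
motor_visual = min_sampling_rate(300)   # motor control / visual perception
listening = min_sampling_rate(140)      # listening (80% consensus band)
```

This gives 600 Hz for the 300 Hz bands and 280 Hz for the 140 Hz band, matching the text's conclusion that rates far above 600 Hz buy nothing for HGA itself.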
Specifically, it is essential to customize the lower and upper cutoff frequencies of the HGA estimator for each subject and task to achieve optimal performance. This optimum can be found using our strategy presented in Section 2.5.2, i.e., by maximizing z-scores in a grid search across lower and upper cutoff frequencies. When task-specific information is not available in the data, our previously published SNR decomposition method can be employed for this maximization problem (Gruenwald et al., 2021). The SNR decomposition method allows quantifying (and thus maximizing) physiological, task-related HGA in ECoG signals without actual information about the experimental protocol. It is important to note that the optimal lower and upper cutoff frequencies of an HGA estimator strongly depend on whether spectral whitening is enabled. This is intuitive because spectral whitening changes the frequency spectrum of the ECoG signal and thus alters the task-related contribution of each spectral component to the overall HGA estimate. Consequently, such a change in the frequency spectrum leads to different optimal lower and upper cutoff frequencies. If spectral whitening is disabled, for example, higher frequency components contribute much less to the overall HGA estimate due to the 1/f power-law ECoG spectrum. As a consequence, the high-gamma frequency bands identified in this study are only directly applicable to HGA estimators with spectral whitening enabled. To overcome this limitation, we provide an analysis of the high-gamma frequency band using an HGA estimator without spectral whitening (see Section 3 in Supplementary material). This analysis yielded a high-gamma frequency band of 90-500 Hz (90% subject consensus) for motor control, 60-500 Hz (80% subject consensus) for listening, and 80-500 Hz (90% subject consensus) for visual perception.
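A grid search over cutoff frequencies of the kind described above can be sketched as follows. This is a simplified stand-in, not the authors' implementation and not the SNR decomposition method: it scores each candidate band by a plain task-vs-baseline z-score of log band power, and the synthetic trials are assumptions for the demo.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def band_log_power(x, fs, lo, hi):
    # Log band power in [lo, hi] Hz via a 4th-order Butterworth band-pass.
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    y = filtfilt(b, a, x)
    return np.log(np.mean(y ** 2) + 1e-12)

def grid_search_cutoffs(task_trials, base_trials, fs, lo_grid, hi_grid):
    # Pick the (lo, hi) cutoff pair maximizing the task-vs-baseline z-score.
    best, best_z = None, -np.inf
    for lo in lo_grid:
        for hi in hi_grid:
            if hi <= lo:
                continue
            pt = np.array([band_log_power(x, fs, lo, hi) for x in task_trials])
            pb = np.array([band_log_power(x, fs, lo, hi) for x in base_trials])
            z = (pt.mean() - pb.mean()) / (pb.std() + 1e-12)
            if z > best_z:
                best, best_z = (lo, hi), z
    return best, best_z

# Synthetic demo: task trials carry extra 80-120 Hz activity (assumed data).
rng = np.random.default_rng(0)
fs = 500
bb, ab = butter(4, [80 / 250, 120 / 250], btype="band")
base_trials = [rng.standard_normal(500) for _ in range(10)]
task_trials = [rng.standard_normal(500)
               + 5 * filtfilt(bb, ab, rng.standard_normal(500))
               for _ in range(10)]
(best_lo, best_hi), best_z = grid_search_cutoffs(
    task_trials, base_trials, fs, [50, 80, 110], [120, 160, 240])
```

Enabling spectral whitening (flattening the 1/f spectrum before scoring) would shift the optimum toward higher cutoffs, which is exactly why the bands reported in the text differ with and without whitening.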
HGA bandwidth

The HGA bandwidth varies greatly across subjects, implying that the transients of the HGA time courses are faster in some subjects than in others. This interpretation is supported by the relatively wide rise time histograms in Figure 4G. At first glance, we were surprised that the HGA bandwidth appears to be well below 10 Hz in all cases. Determining the relationship between the HGA bandwidth and the corresponding rise times allowed us to verify the plausibility of our results. This relationship is based on the assumption that the rise time T_r corresponds to the fastest possible ascent from minimum to maximum in a signal, which is approximately half the period T of the highest frequency component therein, i.e., T_r ≈ T/2. This highest frequency component, in turn, approximately corresponds to the bandwidth B of the signal, so that T ≈ 1/B and consequently T_r ≈ 1/(2B). Substituting our experimentally determined HGA bandwidths into this equation, e.g., the total range of ≈3.2-6.5 Hz, yields corresponding rise times of about ≈70-150 ms, which corresponds approximately to the range of the determined rise times shown in Figure 4G. Using the Nyquist-Shannon sampling theorem again, our HGA bandwidth characterization results can be formulated as an important rule of thumb for HGA estimation: since HGA transients are band-limited by 10 Hz, HGA estimates computed at a rate of 20 Hz (cf. Nyquist rate) already cover all components of the underlying signal. While oversampling (i.e., using a multiple of the Nyquist rate for HGA estimation) may offer advantages for certain signal processing tasks and filters, higher HGA estimation rates do not capture additional information of the underlying, physiological source signal.

Temporal dynamics of HGA

The obtained temporal dynamics support the concept of categorizing HGA into different types corresponding to cognitive or behavioral tasks.
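The T_r ≈ 1/(2B) relationship and the 20 Hz rule of thumb derived above amount to two one-line computations; the helper names below are ours.

```python
def rise_time_from_bandwidth(b_hz):
    # Fastest min-to-max ascent of a signal band-limited to b_hz:
    # T_r ≈ T/2 ≈ 1/(2B).
    return 1.0 / (2.0 * b_hz)

def hga_estimation_rate(bandwidth_hz=10.0):
    # Nyquist rate for HGA estimates: twice the HGA bandwidth limit.
    return 2.0 * bandwidth_hz

# The measured HGA bandwidths of ≈3.2-6.5 Hz map to rise times of roughly
# 70-160 ms, consistent with the histograms in Figure 4G.
fast = rise_time_from_bandwidth(6.5)
slow = rise_time_from_bandwidth(3.2)
rate = hga_estimation_rate()
```

Plugging in the extremes reproduces the ≈70-150 ms range quoted in the text (1/13 s ≈ 77 ms and 1/6.4 s ≈ 156 ms), and the default bandwidth limit yields the 20 Hz estimation rate.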
Specifically, the rise times of smooth HGA (median: 114 ms; IQR: 83-157 ms) are considerably longer than those of burst HGA (83 ms; 66-118 ms) and pulsed HGA (90 ms; 71-127 ms). In addition, pulsed HGA has a consistently short trial duration (median: 444 ms; IQR: 345-572 ms), which contrasts with the relatively wide trial duration range of smooth HGA (616 ms; 444-913 ms) and burst HGA (513 ms; 313-868 ms). These rise times and durations are valuable information for designing experimental protocols. For example, our analyses have shown that the pace of the visual perception task (stimulus duration: 200 ms, one trial per second) was too fast for some subjects and therefore produced contaminated resting-state baseline segments. Interestingly, we did not observe significant amplitude differences between HGA types. This finding is important for functional mapping applications, e.g., for adjusting significance thresholds. The range of our obtained rise times contradicts results from recent research, which reported much faster HGA transients. We addressed this contradiction in an additional analysis provided in Supplementary material (Section 4). Surprisingly, the results of this analysis strongly suggest that the sharp HGA onset peaks produced by Coon and Schalk are noise artifacts. These findings should be addressed more thoroughly in future work.

Methodological consistency

It is essential to ensure that our findings are methodologically consistent. For this reason, we performed additional HGA characterizations with different HGA estimators and compared the results. In such an additional HGA characterization, for example, we disabled spectral whitening or used the Hilbert transform instead of log band power estimates. Our results confirmed the strong impact of spectral whitening on the high-gamma frequency band analysis, which we already expected and discussed in Section 4.1.
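The estimator comparison described in the consistency check above can be sketched with two minimal HGA estimators, a log-band-power variant and a Hilbert-envelope variant. Filter orders, band edges, window length, and the synthetic burst are assumptions for this illustration, not the paper's exact settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def hga_logpower(x, fs, lo=70, hi=140, win=25):
    # HGA via band-pass filtering followed by a sliding log band power.
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    p = filtfilt(b, a, x) ** 2
    return np.log(np.convolve(p, np.ones(win) / win, mode="same") + 1e-12)

def hga_hilbert(x, fs, lo=70, hi=140):
    # HGA via the Hilbert envelope of the band-passed signal; amplitudes
    # are inherently larger than log band power (no log transform).
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x)))

# Synthetic check: both estimators should localize the same 100 Hz burst.
rng = np.random.default_rng(1)
fs = 500
x = rng.standard_normal(2000)
x[800:1200] += 4 * np.sin(2 * np.pi * 100 * np.arange(400) / fs)
peak_lp = int(np.argmax(hga_logpower(x, fs)))
peak_hb = int(np.argmax(hga_hilbert(x, fs)))
```

Agreement of the burst locations across estimators is the kind of consistency the additional characterizations were designed to confirm.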
In addition, our analyses confirmed that the HGA bandwidth is well below 10 Hz, regardless of whether spectral whitening is enabled or the Hilbert transform is used. Finally, all considered HGA estimators yielded similar temporal dynamics of HGA (except for the amplitudes of the Hilbert transform, which are inherently larger since no log transform is used). We provide more details in Supplementary material (Section 3). Overall, these additional HGA characterization analyses yielded expected results, confirming the methodological consistency of our approach.

Limitations and remaining challenges

Our HGA characterization study includes three cognitive and behavioral tasks. To our knowledge, this is the most extensive experimental coverage ever considered in a single ECoG study; however, it is limited given the large number of tasks ever performed in ECoG experiments. A further limitation is that our study covers only one experimental protocol per task, and these protocols differ considerably. For this reason, we could not investigate the impact of the experimental protocol (e.g., stimulus type, duration, pace, intensity). However, such an investigation would have been beyond the scope of our study due to the enormously increased complexity. For simplicity and clarity, we assumed a direct correspondence between cognitive and behavioral tasks and HGA type. Unfortunately, this relationship is ambiguous in reality. For example, we associated visual categorization tasks with pulsed HGA. However, pulsed HGA might just as well be produced in a listening task (e.g., short words or auditory beeps) or a motor control task (e.g., by rapid and discontinuous hand movements). Similarly, processing a continuous stream of visual information is still a visual perception task, but might produce burst HGA or even smooth HGA.
To resolve this ambiguity and avoid misconceptions, the results of our quantitative HGA characterization must be associated with either cognitive/behavioral tasks or HGA types. Therefore, we associate the high-gamma frequency band with cognitive/behavioral tasks rather than HGA types, because the high-gamma frequency band is independent of the HGA time course (and, consequently, independent of the HGA type). In contrast, the HGA bandwidth and the temporal dynamics of HGA are characterized based on the HGA time course, so we associate these characteristics with HGA types rather than cognitive/behavioral tasks.

Outlook and further work

Further studies are needed to complement our findings. For example, it is important to better understand the mechanisms that govern the variation of the high-gamma frequency band and the HGA bandwidth across individual subjects in some cognitive or behavioral tasks. It may also be of interest to relate these results to subjects' behavioral or cognitive abilities and disabilities, e.g., intelligence quotient (IQ), reaction times, motor agility, or cognitive diseases such as dementia. Further studies should also address the variety of HGA types that can be produced by the same cognitive or behavioral task under different experimental conditions. In particular, such a study should evaluate the effects of experimental design parameters on the corresponding HGA. We also recommend including a broader range of cognitive and behavioral tasks (e.g., sensory, expressive language, and mental tasks) to expand experimental coverage. From a technological perspective, we suggest translating our results to other established invasive recording techniques such as sEEG.

Conclusions

In this work, we performed a thorough characterization of HGA in ECoG signals. This characterization showed, for the first time, that the high-gamma frequency band strongly varies across subjects and cognitive and behavioral tasks.
We further observed that transients in HGA time courses are band-limited to 10 Hz. The task-related rise time and duration of these HGA time courses depend on the individual subject and the performed cognitive or behavioral task. Interestingly, the task-related HGA amplitudes are comparable across the investigated tasks. All these findings are of high practical relevance, as they provide a systematic basis for optimizing experiment design, acquisition and processing of ECoG signals, and HGA estimation. At the same time, our results reveal previously unknown characteristics of HGA, the physiological principles of which remain to be investigated in further studies.

Data availability statement

The datasets presented in this article are not readily available because data-sharing agreements must be approved by both the Institutional Review Boards of Albany Medical College and Asahikawa Medical University. Requests to access the datasets should be directed to JG, gruenwald@gtec.at.

Ethics statement

The studies involving human participants were reviewed and approved by the Institutional Review Boards of Albany Medical College and Asahikawa Medical University. All subjects in this study voluntarily participated in the research experiments, and written informed consent was obtained from each patient before participating in the study.

Author contributions

JG: conceptualization, formal analysis, and writing-original draft. JG, CK, SS, and PB: methodology and data curation. JG and SS: software, validation, and visualization. JG, CK, and PB: investigation. KK, PB, and CG: resources. JG, CK, SS, and PB: data curation. CK, JS, KK, PB, and CG: writing-review and editing. KK, PB, and CG: supervision and project administration. PB and KK: funding acquisition. All authors contributed to the article and approved the submitted version.
Endolymphatic hydrops in the unaffected ear of patients with unilateral Ménière's disease

Purpose: Current studies show that frequency tuning modification is a good marker for the detection of endolymphatic hydrops (EH) employing magnetic resonance imaging (MRI) in patients with Ménière's disease (MD). The purpose of the present study is to analyze the auditory and vestibular function with audiometric and vestibular-evoked myogenic potential (VEMP) responses, respectively, in both the affected and unaffected ears of patients with unilateral MD, using MRI as diagnostic support for the degree of EH.

Methods: We retrospectively reviewed the medical records of 76 consecutive patients with unilateral definite MD (age 55 (28-75); 39 women, 37 men). Patients underwent MRI with intravenous gadolinium administration, audiometry, and VEMPs. Functional tests were performed up to a week after the MRI. All were followed up one year after imaging by means of clinical, auditory, and vestibular testing to rule out bilateral involvement.

Results: In the unaffected ear, the mean pure-tone average is normal even in cases with hydrops and, for a similar severity of hydrops, is significantly lower than in the affected ear. The amplitudes of the response at 0.5 kHz and at 1 kHz were significantly lower in the affected ears than in the unaffected ears. The relative amplitude ratio (1 kHz-0.5 kHz) was significantly lower in the affected ear and, in the case of the oVEMP response, depends on the degree of EH. The response in the unaffected ear was not modified by the presence or the degree of hydrops.

Conclusion: In the unaffected ear, hydrops is not associated with hearing deterioration. For a similar degree of hydrops, hearing loss is significantly greater in the affected ear. The endolymphatic hydrops in the vestibule induces a frequency bias in the VEMP response only in the affected ear and not in the unaffected ear.
Because of these findings, we consider that hydrops does not represent an active disorder in the unaffected ear.

Introduction

The vestibular evaluation of patients with any type of dizziness, such as in Ménière's disease (MD), has undergone a major change since the introduction of new tests that analyze the reflexive response to sudden angular movements, as in the video head-impulse test (vHIT), or to low sounds, skull vibrations, or galvanic stimulation, as in vestibular-evoked myogenic potentials (VEMP). VEMP can be recorded below the eye, as close as possible to the inferior oblique muscle (ocular VEMP, oVEMP), or on the surface of the sternocleidomastoid muscle (cervical VEMP, cVEMP); the former gives the response mainly from the utricle of the contralateral side and the latter mainly from the saccule of the ipsilateral side. In patients with MD, the VEMP response depends on certain characteristics of the disease [1,2] and of the test methodology [3]. In cVEMPs, it also depends on the test frequency: there is an increase in the threshold of the response for the low frequency (0.5 kHz), or "altered frequency" tuning [4]. The abnormal response to different frequencies, the normalized p13-n23 amplitude, and the VEMP inhibition depth have been considered good markers of Ménière's disease for the detection of suspected asymptomatic hydrops in the saccule [5]. This finding has also been obtained to a lesser degree in the unaffected ear of a small group of patients with unilateral MD [6]. The difference in the tuning properties of patients with unilateral MD has also been shown when the amplitude of the response is the variable under study [7]: the amplitude of the response to 0.5 kHz is lower than expected [8] when the relative value of amplitudes obtained at 0.5 kHz and 1 kHz (cVEMP AR0.5/1) is considered [9].
It is a good indicator of a recent attack of vertigo, as the response becomes more abnormal in comparison with that found in patients who are stable or without a documented attack close to the day of testing [10]. Frequency tuning modification and absent response are also effective in detecting endolymphatic hydrops (EH) by means of magnetic resonance imaging (MRI) in patients with MD [11]. In this work, we shall analyze the air-conducted VEMP (AC-VEMP) response in both the affected and unaffected ears of patients with unilateral MD. We shall study cervical as well as ocular VEMP, given also that the information on the latter type of VEMP is scarce. The hypothesis is that, considering the VEMP AR0.5/1 reduction as an indication of abnormal function in the inner ear of patients with MD, the finding of hydrops in the unaffected ear will have functional relevance if that shift in the tuning properties of the VEMP is also found.

Inclusion criteria

The patients in this study were diagnosed with unilateral MD and fulfilled the criteria to be considered "definite" according to the latest criteria [12]. None of the patients had been previously treated with intratympanic medication or surgically. The auditory function and vestibular tests were performed the same day and within one week from the MRI. All were followed up one year after imaging by means of clinical, auditory, and vestibular testing to rule out bilateral involvement.

Exclusion criteria

VEMP: when the latency of any of the wave components (p13 or n23 in the cVEMP, or n10 or p16 in the oVEMP) was outside the expected interval in either of the evaluated ears [3]. Demographic data included age, sex, duration of the disease (years since the first typical episode), number of vertigo crises in the 6 months before evaluation (N), and activity of the disease, defined as days since the most recent typical vertigo crisis. Bedside vestibular examination included ocular motility, bedside VOR test, and nystagmus.
Since no novel or exceptional interventions were performed in this retrospective database study, only the approval of the local ethical committee from the ENT department of the institution was required, in accordance with applicable state laws. The present study was conducted in accordance with the tenets of the Declaration of Helsinki. All patients gave written consent before participating.

VOR evaluation

This was performed with a video system (vHIT GN Otometrics, Denmark). The parameter evaluated was the VOR mean gain for head impulses on the affected (Gaff) and unaffected side (Gnaff).

VEMP

Calculation of amplitude. The number of recordings made per subject was based on the reproducibility of the observed response. In those cases in which the response was absent, the mean amplitude was considered null (0 µV). To calculate the interaural asymmetry ratio (IAAR), the mean null values were artificially set at 1 µV, as described in previous work [14].

Calculation of the IAAR. It was calculated in accordance with the following formula: IAAR (%) = 100 × (A_unaffected − A_affected)/(A_unaffected + A_affected), where A denotes the response amplitude of each ear at the tested frequency.

Calculation of the 0.5/1 kHz amplitude ratio (VEMP AR0.5/1). The amplitude ratio for each ear (affected and unaffected) was calculated in accordance with the following formula: VEMP AR0.5/1 = A(0.5 kHz)/A(1 kHz).

As an example, the results in the cVEMP of a patient with unilateral MD affecting the right ear are shown in Fig. 1.

Evaluation of endolymphatic hydrops with MRI

All MRI studies were performed in two 3 Tesla magnets, either a Siemens Magnetom Vida (Siemens Healthineers, Erlangen, Germany) with a dedicated Siemens 20-channel head coil or a Siemens Magnetom Skyra with a dedicated Siemens 32-channel head coil. The dedicated MRI hydrops sequence employed was the 3D Inversion Recovery with REAL reconstruction (3D REAL-IR), as described by Naganawa et al. [15]. This sequence was carefully chosen instead of the other widely available hydrops sequence, the 3D "Fluid attenuated inversion recovery" (FLAIR) [16].
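The two ratio computations described above can be sketched in a few lines. The amplitudes below are hypothetical values chosen only to reproduce the ratios shown in Fig. 1, not the study's measurements, and the IAAR sign convention (unaffected minus affected) is an assumption consistent with the signed Fig. 1 values.

```python
def vemp_ar(amp_05_uv, amp_1_uv):
    # 0.5/1 kHz amplitude ratio for one ear (VEMP AR0.5/1).
    return amp_05_uv / amp_1_uv

def iaar(amp_unaffected_uv, amp_affected_uv):
    # Interaural asymmetry ratio in percent. Sign convention (unaffected
    # minus affected) is an assumption; absent responses are set to 1 uV
    # beforehand, as described in the text.
    return 100.0 * (amp_unaffected_uv - amp_affected_uv) / (
        amp_unaffected_uv + amp_affected_uv)

# Hypothetical amplitudes (uV) reproducing the Fig. 1 ratios:
ar_affected = vemp_ar(39.0, 100.0)    # 0.39: response tuned toward 1 kHz
ar_unaffected = vemp_ar(44.1, 30.0)   # 1.47: normal low-frequency tuning
```

A VEMP AR0.5/1 below 1 in the affected ear captures the tuning shift toward 1 kHz that the paper uses as a marker of hydrops.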
For anatomical purposes, a heavily T2-weighted cisternography sequence was also obtained. Images were obtained 4 h after a single dose of intravenous Gd administration (Gadovist; Bayer-Schering Pharma, Berlin, Germany; 1.0 mmol/mL at a dose of 0.1 mmol/kg). The whole imaging protocol took about sixteen minutes and consisted of: a heavily T2-weighted sequence (T2 3D SPACE, Sampling Perfection with Application optimized Contrasts using different flip angle Evolution) with the following parameters: section thickness, 0.5 mm; TR, 1400 ms; TE, 152 ms; flip angle, 120°; bandwidth, 289 Hz/pixel; voxel size, 0.5 × 0.5 × 0.5; and scan time, 5 min. The 3D-IR: section thickness, 0.8 mm; TR, 16,000 ms; TE, 551 ms; TI, 2700 ms; flip angle, 140°; bandwidth, 434 Hz/pixel; voxel size, 0.5 × 0.5 × 0.8; and scan time, 11 min.

Fig. 1 Representative cVEMP data in a 50-year-old male patient with unilateral definitive Ménière's disease in the right ear. The affected ear shows an increased wave amplitude for the frequency of 1 kHz compared to that at 0.5 kHz. The IAAR was 7.56% for the 0.5 kHz test and −53% for the 1 kHz test. The VEMP AR0.5/1 was 0.39 on the affected side and 1.47 on the left, or unaffected, side. cVEMP, vestibular-evoked myogenic potential; MD, Ménière's disease.

Two very experienced head and neck radiologists qualitatively evaluated the MR images. Cochlear endolymphatic hydrops (EH) was qualitatively assessed using a three-grade scale (none, moderate, severe) with an axial plane at a midmodiolar level [17]. For the evaluation of vestibular EH, a four-grade scale was employed (none, slight, moderate, severe) [18,19].

Statistics

To compare the amplitudes and VEMP AR0.5/1 between the groups, parametric and non-parametric tests were used. The normality of the quantitative variables was studied with the Shapiro-Wilk test. The non-parametric Kruskal-Wallis test was used for comparisons between three and four groups.
The correlation was calculated using the Spearman rank correlation coefficient for non-parametric variables. The descriptive statistics are expressed as median (p25, p75). All of the statistical analyses were performed with Stata 12 (StataCorp, College Station, TX).

Results

In this work, we have included 76 patients, of which 39 (51%) were women and 37 (49%) were men. The right ear was affected in 30 patients and the left in 46. Mean age was 55 years (28-75), mean disease duration was 5 years [95% confidence interval (CI 95) 3.7-6.6], mean number of days since the last vertigo spell was 42 (CI 95 24-60), and the mean number of vertigo spells in the previous 6 months was 6 (CI 95 5-7). The mean PTA 0.5-3 was 49 ± 21 dB in the affected ear and 15 ± 11 dB in the unaffected ear. The mean threshold for 250 Hz was 54 ± 21 dB in the affected ear and 15 ± 10 dB in the unaffected ear, and the mean PTA 4-6 was 58 ± 23 dB in the affected ear and 32 ± 34 dB in the unaffected ear. On vestibular examination, spontaneous nystagmus was found in 28 patients, and the VOR was considered abnormal (both at bedside and with vHIT evaluation) in 20 patients. In Table 1, we present the data for the mean PTA 0.5-3 according to the severity of hydrops. There is a clear tendency for the mean PTA 0.5-3 to become higher as the severity of hydrops increases in the affected ear. For a similar degree of cochlear hydrops, the PTA 0.5-3 is significantly higher in the affected ear (for moderate severity the statistical assessment is invalid); the same occurs for vestibular hydrops (for severe hydrops the statistical assessment is also invalid). After performing VEMP testing and according to the inclusion criteria, oVEMPs were considered for evaluation in 57 patients and cVEMPs in 61. In the former group, in the affected ear, hydrops was seen in the cochlea in 43/57 and in the vestibule in 49/57, while in the unaffected ear these figures were 6/57 for the cochlea and 12/57 for the vestibule.
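The statistical pipeline described above (Shapiro-Wilk screening, Kruskal-Wallis group comparison, Spearman correlation) can be sketched with `scipy.stats`. The group samples below are illustrative assumptions, not the study's data, and the analyses were actually run in Stata.

```python
import numpy as np
from scipy import stats

# Hypothetical VEMP AR0.5/1 samples for three hydrops-severity groups.
rng = np.random.default_rng(2)
none = rng.normal(1.2, 0.3, 20)
moderate = rng.normal(0.9, 0.3, 20)
severe = rng.normal(0.6, 0.3, 20)

# Normality screening with the Shapiro-Wilk test.
_, p_norm = stats.shapiro(none)

# Non-parametric comparison across the three groups: Kruskal-Wallis.
h_stat, p_kw = stats.kruskal(none, moderate, severe)

# Spearman rank correlation, e.g. ratio versus ordinal hydrops grade.
grades = np.repeat([0, 1, 2], 20)
values = np.concatenate([none, moderate, severe])
rho, p_rho = stats.spearmanr(grades, values)
```

With ratios decreasing across severity grades, the Spearman coefficient comes out negative, mirroring the relationship plotted in Fig. 3.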
In the second group (those with a recognized response in both ears), in the affected ear, hydrops was seen in the cochlea in 47/61 and in the vestibule in 53/61, while in the unaffected ear these figures were 7/60 for the cochlea and 11/60 for the vestibule. The proportion of hydrops in the cochlea and vestibule was not significantly different between the patients with recognized oVEMPs and cVEMPs, both in the affected and unaffected ears. In Table 2, we present the mean data of the VEMP response in the affected and unaffected ears: the amplitude of the response at 0.5 kHz, at 1 kHz, and the relative value of their amplitudes. Differences were significant for the three measures (lower amplitude of the response and VEMP AR0.5/1 in the affected ear) in the case of the oVEMP, and only for 0.5 kHz in the case of the cVEMP. It is interesting to note that the mean IAAR is far from abnormal relative to our database of normal subjects. The relevance of hydrops in each ear (with and without hydrops) was evaluated with the VEMP AR0.5/1. A significant difference was only obtained in the case of cVEMP and cochlear hydrops, as shown in Table 3, which indicates that there is no frequency bias. In Table 4 and Fig. 2, we present the data of the VEMP AR0.5/1 in the affected (Fig. 2a) and unaffected (Fig. 2b) ears. As expected, in the affected ear hydrops induces a significant dysfunction, as shown by the differences when hydrops was detected, except for cVEMP and vestibular hydrops. However, in the unaffected ear, the VEMP AR0.5/1 is not significantly different whether hydrops was detected or not. The result in the oVEMP AR0.5/1 was evaluated in more detail in the case of the affected ear and, as shown in Fig. 3, we observe how it becomes lower as the severity of hydrops increases.
Discussion

The motivation for this study came first from the consideration of four possible scenarios that we commonly face at present when dealing with patients with "definite" unilateral MD after MRI evaluation. The most common scenario is when cochlear or vestibular hydrops is detected in the MRI only in the affected ear, as occurs in 70% of the patients in our study. The second is when hydrops is detected in both ears owing to simultaneous cochlear or vestibular EH, as seen in 25% of our patients. The third and fourth scenarios (EH only in the unaffected ear, or no hydrops in either ear) are markedly unexpected: 1% and 4%, respectively.

EH in the affected ear

The amount of hydrops in the affected ear of patients with unilateral MD is in the higher range of what has been reported by previous authors [20], but is similar to what has been reported in otopathology reports. Our 94% EH detection rate, when both cochlea and vestibule are considered, has two main reasons: the population under study and the technique itself. Our population was made up only of patients who fulfilled the criteria for "definite" unilateral MD according to the most recent criteria and were very homogeneous in terms of disease duration. This duration can be considered medium, which is important because it could influence the severity of hydrops as seen in the MRI: longer disease duration is associated with more severe EH [21], although there are reports that do not agree on this association [22]. As a limitation of our study, the precise characteristics of the initiation of the disease were not noted. In the case of recent-onset disease, the clinical presentation (which differs very much between patients) is probably another source of variability.
It has been shown that in 61% of patients, auditory symptoms occur first (months before the first vertigo crisis) and the picture completes (auditory plus vestibular) afterwards, but that the opposite (vestibular first) occurs in 18%; both appear simultaneously in 21% [23]. The second cause of our high detection rate is that EH with MRI can be over-diagnosed, according to reports in normal subjects [24].

EH in the unaffected ear

Hydrops in the unaffected ear was found in 20/76 (26.3%) of patients and was more frequent in the vestibule: in seven patients, hydrops was found both in the cochlea and vestibule, in three in the cochlea only, and in 10 only in the vestibule. In one patient, hydrops was only seen in the unaffected ear: this was a 50-year-old female with a one-year history of MD in her left ear when the MRI was performed; she was also diagnosed with migraine with aura, but the last attack of migraine took place almost one year before the MRI. It is well known that EH is mostly related to cochlear dysfunction in cases of vestibular migraine with auditory symptoms, which is related to the degree of cochlear or vestibular hydrops [25]. The number of patients with hydrops in the unaffected ear is very similar to that reported by others using MRI to detect EH, and it is considered to be part of a more severe disease or one with a longer duration [26]. It is also similar to the number of patients expected to develop bilateral MD [27][28][29][30]. This is by no means a consistent argument for deciding that a particular technique showing that number could eventually identify potential bilateral MD patients in advance, mainly when considering the extreme differences in studies addressing the incidence of bilateral MD [7]. Also, we have to take into account that 20% of patients with unilateral MD show EH in the unaffected ear at postmortem examination [31].
In our study, we were, therefore, interested in analyzing hearing and vestibular function: the former by means of audiometric findings and the latter with AC-VEMP. The question was whether EH in the contralateral ear also indicates auditory or vestibular dysfunction in that "normal" ear. Here, we have shown that for a similar degree of hydrops in both ears, there are significant differences in the amount of hearing damage: the PTA is higher in the affected ear when cochlear and vestibular hydrops were mild and when vestibular hydrops was moderate. In the other case, of moderate cochlear hydrops, there were not enough unaffected ears to statistically compare results. In the case of no hydrops at all in both ears, there are also differences in the PTA, which continues to be significantly higher in the affected ear. This can be explained by different hypotheses. The first is the inability to detect subtle changes in the cochlea (of the unaffected ear) with current methodology [32]. Use of the intratympanic route for gadolinium administration [33] or electrocochleography could both be methods to better analyze that situation and better characterize those ears [34]. Second, absent EH in the affected ear with abnormal hearing could also indicate that hydrops is not the only change relevant to symptomatology [35], as we know occurs, on the contrary, in well-developed disease [36]. Inclusion and exclusion criteria were set so as to have patients with consistent responses or VEMPs, and for this reason the initial number of patients was reduced. The patients who were excluded did not share any specific characteristic in terms of clinical parameters. The severity of hydrops was also randomly distributed in that group. The number of patients now under study probably explains why some results were not congruent [37].
As expected, and in accordance with the previously mentioned findings, we found a lower VEMP AR0.5/1 in the affected ear as compared to the unaffected ear [38]. This difference between ears was significant for the oVEMP but not for the cVEMP. As shown in Table 2, this is because the amplitude of the response in the affected ear was significantly lower than in the unaffected ear for the 0.5 kHz stimulus but not for the 1 kHz stimulus in the cVEMP [8]. In the case of the oVEMP, the same occurred, but the amplitude difference between the affected and unaffected ears for the 1 kHz stimulus was proportionally smaller yet still reached significance. This paradoxical behavior needs to be corroborated with bone-conducted stimuli and in larger studies because it does not match the well-known data from experimental work on saccular afferent thresholds [39]. With the possibility of analyzing results according to the degree of EH, we show that when hydrops is found in the cochlea, there is a more severe dysfunction in the affected ear as indicated by frequency tuning. In the case of hydrops in the unaffected ear, we have not found EH to be related to significant differences in the value of the 0.5/1 kHz amplitude ratio. For this reason, we consider that, when dealing with patients with unilateral MD, VEMP testing must be part of the laboratory evaluation given its ability to detect more subtle changes in EH [40]. These findings must not be overlooked and should be integrated into the final decision on treatment for patients who are not doing well, and when an ablative or semi-ablative treatment is considered for the affected ear.

Conclusions

EH occurs in patients with unilateral MD more frequently only in the affected ear but can also be found in both ears. In the unaffected ear, hydrops is not associated with hearing deterioration: for a similar degree of hydrops in the affected and unaffected ear, hearing loss is significantly greater in the former.
The amount of vestibular dysfunction as shown by the 0.5/1 kHz amplitude ratio needs to be part of the evaluation during follow-up to better acknowledge its relevance in EH development in the unaffected ear.

Funding Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. The study was supported by ISCIII (19/00414), co-funded by ERDF, "A way to make Europe". Instituto de Salud Carlos III, Ministerio de Ciencia, Innovación y Universidades, Government of Spain.

Conflict of interest The authors have no funding, financial relationships, or conflict of interest to disclose.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Fig. 3 The oVEMP AR0.5/1 according to the severity of vestibular hydrops in the affected ear. Asterisks indicate significant differences (*p < 0.05, **p < 0.01)
Reliable Description of Preliminaries Item Using Civil Engineering Preliminaries Protocol (CEPP) In Conventional Contracts

Civil engineering work deals with nature and thus is exposed to enormous discrepancies due to nature's complexity, compared to building works which are more certain. In the Malaysian construction industry, it is generally accepted that civil engineers administer civil engineering contracts and prepare tender documents. The Civil Engineering Preliminaries Protocol (CEPP) for conventional contracts is ongoing research that deliberates on the cost-related items included in the Preliminaries. Preliminaries are subjective in nature and largely challenging to price. This paper considered previous research findings by conducting a literature review and accordingly highlighted the problem statements raised on the contractual risks present due to fallacies in item descriptions. The identification of underlying problems and gaps within the area of study justifies the aim of the research: to establish a common protocol that is familiar to both engineers and contractors. The objective of the protocol is to eliminate disputes due to vagueness, ambiguity, and duplication of preliminary items in order to improve price accuracy. In practice, different approaches are taken by engineers and contractors in dealing with preliminary items. Engineers provide bills of preliminaries and contractors price them accordingly, without establishing any mutual understanding of or responsibility for risks. Conventional contracts prohibit contractors from providing their own preliminaries. Contractors instead have to obtain clarification on any ambiguities in the contract within the stipulated time given during the tender period. As a way forward, the CEPP provides better clarity, accuracy, and transparency to engineers and contractors as well as other construction players in general.
Reliable descriptions of preliminary items ensure better price accuracy for the betterment of the construction industry.

Introduction

Civil engineering deals with the built environment and encompasses the design and construction of infrastructure such as bridges, dams and other civil works [1][2][3]. It also deals with nature and thus is exposed to enormous discrepancies compared to building works, which are more certain. Preliminary items are subjective in terms of cost and largely challenging to price [4][5][6][7][8][9]. In Malaysia, the cost of preliminaries for civil work is between three and seven percent of the construction cost [10], but is unlikely to exceed 10% [11]. In the Malaysian construction industry, it is usual for civil engineers to administer civil engineering contracts [1][2][3][4]. It is generally accepted that engineers have to prepare tender documents [12]. Detailed descriptions of preliminary items are a crucial part of the tender to ensure correct pricing [13]. For conventional contracts, engineers are generally the originators of bills, such as bills of preliminaries; these bills are then handed to contractors to price [14]. As such, contractors have no right to interfere during the preparation of the document. Various approaches have been used by both engineers and contractors in preparing/construing the preliminary items list [10], [15]. Usually, engineers establish the bills of preliminaries for contractors to price accordingly and take all the risks [7][8][9]. Nevertheless, contractors can obtain clarification of any ambiguity in the items listed within the tender period [19], [20]. This minimises misinterpretation that may lead to price errors.

Preliminary items in Conventional Contracts

The reliability of preliminary items has become a subject matter in the local construction industry and overseas due to its complexities [4], [6], [9], [11], [14][15][16][17].
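As a back-of-the-envelope illustration of the percentages cited above (three to seven percent of construction cost, unlikely to exceed 10%), a hedged sketch with a hypothetical contract value — the RM 50 million figure is illustrative only:

```python
def preliminaries_estimate(construction_cost, low=0.03, high=0.07, ceiling=0.10):
    """Typical low/high estimate for civil-work preliminaries, plus the
    ceiling they are unlikely to exceed (fractions taken from the text)."""
    return (construction_cost * low,
            construction_cost * high,
            construction_cost * ceiling)

# Hypothetical RM 50 million civil engineering contract.
low, high, cap = preliminaries_estimate(50_000_000)
# Preliminaries would typically fall between `low` and `high`,
# and would rarely exceed `cap`.
```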
The current procedure requires preliminaries to comprise all important information, instructions and obligations, yet be descriptive in nature [17]. Preliminaries may also contain irrelevant items that may bring about higher bid prices [7]. Contracts are typically divided into five categories to meet different purposes, namely traditional general contracts, design and build contracts, management contracts, hybrid contracts and miscellaneous contracts [24]. The traditional general contract is the only one that involves the Bills of Quantities, which comprise trade bills and preliminary items and form part of the contract [24]. The traditional general contract category correlates with the conventional contract and has been commonly used in Malaysia since 1931 by the Public Works Department for construction projects [25]. In a conventional contract, preliminaries constitute a fraction of the documents but carry an important financial obligation for contractors alongside the trade bills [8], [26]. Preliminaries and trade bills are parts of the integrated Bills of Quantities (BQ) [3], [9][10][11]. Figure 1 shows an example of a preliminary that is generally included in conventional contracts.

Underlying Issues of Preliminaries

In conventional civil engineering contracts, it is the engineer's prerogative to establish the preliminary items and trade bills that constitute the Bills of Quantities (BQ) [6], [17], [27]. Engineers have used various means, ranging from bespoke methods to more organised ones. During the tender stage, contractors use their own interpretation and judgement to price the items [28], [29], irrespective of the items' reliability. Preliminary items are the most difficult to price, as their pricing is arbitrary [29]. Contractually, contractors take full responsibility for the bills although they are not the ones who prepared them.
The approaches taken by the two parties are distinct, and there is no common platform that can be used to provide a mutual understanding between engineers and contractors, thus causing huge price variations [16], [17], [21]. A protocol, namely the Civil Engineering Preliminaries Protocol (CEPP), has been established to promote the reliability of preliminary item descriptions in order to avoid disputes and establish better price accuracy. The description of items has garnered a lot of attention due to its susceptibility to discrepancies, repetitiveness, insufficient deliberation, manipulation and arbitrariness, and most of all the need to be definitive [7], [29][30][31].

Various Methodologies Used by Engineers

Various methodologies have been adopted by engineers to establish the preliminary items that form a part of the BQ. Due to the complexity and uncertainty of civil works, engineers usually adapt the descriptions of preliminary items from past projects [28], with modification, in order to generate the general bills of a project based on their best assumptions and experience. Engineers do exploit the descriptions of preliminary items from past projects for new projects [7]. This is easily understood as an easier way to conclude the description of cost-related preliminary items. It is typically done due to either insufficient time given to prepare the tender documents, or the existence of a past project of a similar nature. This practice, if not cautiously measured for the necessary adjustments to suit a project's needs, may lead to discrepancies in the item descriptions [31]. Lack of information may also lead to the omission of important items, which may thus incur additional contract costs due to additional orders [27][28][29]. Engineers expect contractors to be meticulous in their pricing of preliminaries, and to price all included items [6], [16], [35].
However, a problem exists in most cases where the description of items is not adequately deliberated on in order to specify the requirement [7]. Though contractors are required to raise queries about incomplete or ambiguous information before the closing of a tender, certain contractors opt for passive action and prefer to remain silent in the hope that the issue(s) raised will be resolved in time [7]. The writer opines that it is the engineers' responsibility to provide a reliable and clear description of the preliminary items, particularly for cost-related items. Figure 2 depicts the present common approach taken by engineers to conclude preliminaries. There are several formats used in present practice, ranging from the simplest to the most tedious forms of preliminaries [15]. The writer suggests the use of a codified format by means of a common protocol to minimise discrepancies and promote better transparency.

Various Approaches by Contractors to Construe Preliminaries

Contractors are obliged to price preliminaries in compliance with a tender's requirements within a specific tender period. During the process, various approaches are taken to construe and anticipate the workload, timespan, direct and indirect costs, and requirements for suppliers or sub-contractors. Contractors react to the bills of preliminaries according to a project's complexities. Site information obtained during site reconnaissance, organised either by the employer or on the contractor's own initiative, is important [32]. Contractors predominantly price cost-related preliminaries based on their own ideas and anticipation [28], [29]. Nonetheless, they depend entirely on the reliability of the item descriptions as prepared by the engineers.
Despite their experience, contractors usually have systematic approaches to this process, such as anticipating all possible site constraints, pricing strategies based on the latest price statistics and their own experience in a similar project environment, striking good deals with local labour and suppliers, and working with experienced managers [32]. Figure 3 depicts contractors as the sole receiver who complies with a tender or contract's requirements based on their own interpretations.

Fig. 3. Typical approach by contractors in interpreting preliminaries

Contractors are expected to price all items included in the preliminaries despite the complexities and uncertainties [16]. In return, contractors take drastic actions to minimise their risks by jacking up the prices of other items [7], or instead accept the risks by charging a lower price with a reduced profit margin out of desperation to secure a contract [32]. Contractors seldom take proactive action when dealing with incomplete information. They normally stay silent and expect the issue to resolve itself over time [7]. Contractors simply comply with the terms of tender, or "compliance to the bid", during the tender exercise to avoid unnecessary trouble that may reduce their chances of winning the contract bid.

Research Methodologies

A literature review was carried out on 41 research or academic papers from multiple platforms, not limited to Scopus, Google Scholar, conferences, symposiums, and online journals. References to two online newspapers were also made for issues not covered by previous research. A qualitative analysis approach was adopted. The prominent issues were discussed to arrive at a coherent explanation. Problem statements were generalised with emphasis on the attitudes of engineers and contractors. The objective is to find the imperative criteria necessary to institute a reliable description of preliminary items.
The findings set forth a way forward for a common protocol familiar to the construction proponents.

Findings and Discussions

Items of preliminaries are arbitrary and thus difficult to price accurately [29]. The writer agrees that as preliminaries tend to contain all particulars, namely general information, contractual items, specific project or client requirements, temporary works, and reminders about the obligations of the contractors, it is therefore common for the preliminary item form to contain general information, general instructions and recurring items intermingled with each other. Unrelated information unnecessary to the course of the project is also usually incorporated [7]. Such unrelated items need to be discarded. The approach needs to be simplified for easy understanding, thus making the item descriptions more definitive. The present approach to conventional contracts does not require early contractor involvement (ECI) during the preparation of the tender documentation. Contractors are only required to submit the price bid during the tender stage. Instead of the present approach, contractors should be invited to participate at an early stage in order to provide construction advice and early identification of risks [36]. This is a theoretical approach that can be adopted to improve the quality and reliability of preliminary item descriptions. It may be advantageous for medium and large projects in terms of the development costs involved. However, its implementation for small-scale projects may not be feasible. The measurement of cost-related preliminary items is a serious challenge, as it should be codified and made familiar to both contractors and engineers [7]. The terminologies, construction techniques, descriptions of items and measurement approaches used by the engineers must be clear, transparent and precise.
All matters included in the preliminaries must be definitive in order to provide sufficient understanding for the contractors to price [30]. A common protocol is imperative to bring engineers and contractors together in a mutual understanding of what the other does and anticipates. Figure 4 emphasises the importance of adopting a common protocol in order to establish reliable preliminaries, which will eventually lead to a reliable tender price.

Fig. 4. Common Protocol Influencing Reliability of Preliminaries and Tender Price

Consideration should be given to ensuring all vague or indeterminate areas are sufficiently clarified, and due effort should be made to exclude uncertainties from the tender documentation [37]. The writer opines that engineers should meticulously conduct due diligence studies and acquire sufficient practical experience of work of a similar nature in order to be able to prepare precise preliminary item forms. Another important factor that should be considered is coordination between the descriptions of preliminary items and the trade bills in order to avoid duplication, undue conflicts or omissions [38]. Different approaches to and understandings of the preliminary items between engineers and contractors often cause unnecessary disputes [4]. The writer anticipates that disputes may occur if the description of preliminary items is vague and/or contractors purposely manipulate the prices of specific items to their own advantage. It becomes worse if the manipulation is done by the employer's party, namely the engineers, and is complicated by misunderstandings on the side of the contractors [33]. It is common practice in Malaysia and other countries for cost-related preliminary items to be measured entirely as lump sums [20], [21], [39] with long-winded descriptions. Unfortunately, a breakdown of items is not mandatory, so the components of the lump sum price are not verified.
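To make the point concrete, a hypothetical itemised breakdown (item codes, quantities and rates below are illustrative only, not taken from the CEPP) shows how a lump sum becomes verifiable line by line:

```python
# Hypothetical cost-related preliminary item, broken down instead of
# being priced as a single unverified lump sum.
site_facilities = {
    "P1.1 site office":       {"unit": "month", "qty": 18,  "rate": 2_500},
    "P1.2 temporary fencing": {"unit": "m",     "qty": 400, "rate": 45},
    "P1.3 site signboard":    {"unit": "item",  "qty": 1,   "rate": 3_000},
}

def itemised_total(item):
    """Sum of qty x rate over the breakdown: each component of the
    lump sum can now be checked by both engineer and contractor."""
    return sum(row["qty"] * row["rate"] for row in item.values())

lump_sum = itemised_total(site_facilities)  # 45_000 + 18_000 + 3_000
```

With a mandatory breakdown like this, any arbitrary or manipulated component stands out instead of hiding inside one figure.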
In such circumstances, contractors provide unverified or arbitrary lump sum figures for the preferred items based on their own interpretation and anticipation [16], [29]. However, through a more rigorous approach, accurate and detailed information on itemised preliminaries may provide better results and make the items easier to price [7]. A breakdown of items is therefore important to the attainment of the objective [23], [35], [40]. Indirect costs, e.g. off-record 'compensation' or 'protection money' [41], [42] and on-site authority requirements, are usually the main contributors to the cost of preliminaries during construction. Despite being important, contractors are unable to ascertain the extent of their impact at the point of tender, and they are thus seldom considered an integral part of the contract. Though intangible in nature, indirect costs cannot be neglected as a critical cost component associated with the construction process [43]. The preliminary items for indirect costs can be used to cushion some unexpected cost impacts.

Conclusions

It is construed that engineers and contractors differ in establishing/construing the preliminary items. The descriptions of preliminary items must be definitive in order to provide sufficient understanding for contractors to price, hence promoting better price accuracy. Reliable descriptions of preliminary items that are easy to comprehend lead to reliable tender prices. As a way forward, a common understanding between the parties in particular, and other construction proponents in general, through a codified protocol is deemed important. The Civil Engineering Preliminaries Protocol (CEPP) is ongoing research that leverages the approaches of both engineers and contractors. Differences are mediated by studying the breakdown of items, units of measurement, use of terminologies, construction techniques, and definitive descriptions of each item.
The modification and introduction of new elements, such as provisional sums, unexpected indirect costs such as compensation, Early Contractor Involvement (ECI) and Early Stakeholder Involvement (ESI), are among the main elements given attention. The criteria for reliable descriptions of preliminary items pragmatically include, but are not limited to: (1) cautious adoption from past projects, (2) relevance, (3) definitiveness, (4) easy comprehension, (5) elimination of discrepancies, repetitiveness, insufficient deliberation, manipulation, and arbitrariness, (6) clear, transparent and precise description, (7) avoidance of uncertainty, (8) coordination with trade bills to avoid undue problems, (9) breakdown of items to provide clarity and accuracy, (10) indirect costs as a critical cost component, and (11) measurement of cost-related preliminary items codified and familiar to contractors and engineers. In a nutshell, the CEPP should contain the above criteria to provide better clarity, accuracy, and transparency to engineers and contractors as well as other construction proponents in general. Reliable descriptions of preliminary items ensure better price accuracy for the betterment of the construction industry.
Immunotherapy or targeted therapy: What will be the future treatment for anaplastic thyroid carcinoma?

Anaplastic thyroid carcinoma (ATC) is a rare and aggressive form of thyroid carcinoma (TC). Currently, there are no effective treatments for this condition. In the past few years, targeted therapy and immunotherapy have made significant progress in ATC treatment. Several common genetic mutations have been found in ATC cells, involving different molecular pathways related to tumor progression, and new therapies that act on these molecular pathways have been studied to improve the quality of life of these patients. In 2018, the FDA approved dabrafenib combined with trametinib to treat BRAF-positive ATC, confirming its therapeutic potential. At the same time, the recent emergence of immunotherapy has attracted wide attention from researchers. While immunotherapy for ATC is still in the experimental stage, numerous studies have shown that it is a potential therapy for ATC. In addition, it has been found that combining immunotherapy with targeted therapy may enhance the anti-tumor effect of targeted therapy. In recent years, there has been progress in the study of targeted therapy or immunotherapy combined with radiotherapy or chemotherapy, showing the promise of combined therapy in ATC. In this review, we analyze the response mechanisms and potential effects of targeted therapy, immunotherapy, and combination therapy in ATC treatment and explore the future of treatment for ATC.

Introduction

Thyroid carcinoma (TC) is one of the most common types of cancer in the world (1). Anaplastic thyroid carcinoma (ATC) contributes to more than 50% of all TC mortality, despite the fact that it accounts for only approximately 2% of all TCs (2). According to the American Thyroid Association (ATA) guidelines, the conventional treatment for ATC includes surgery, radiotherapy, and chemotherapy.
Although patients with intrathyroidal ATC treated with total thyroidectomy, high-dose radiation therapy and other chemotherapies show improved survival rates, the prognosis is dire in patients with metastatic and progressive ATC (3). Given the poor outcomes of current therapies, the American Joint Committee on Cancer has categorized all ATCs as stage IV tumors (4). In recent years, with the emergence of genomic medicine and the theory of tumor immunoediting, an increasing number of clinicians have aimed to treat ATC using targeted therapy and immunotherapy. Compared to traditional therapies, targeted therapy is more helpful in improving treatment effects and ameliorating patients' quality of life (5). In recent years, with the rapid development of high-throughput sequencing technology, the detection of cancer-related gene mutations and the development of new targeted drugs that block related signaling pathways have allowed clinicians to flexibly adjust treatment plans. Many common gene mutations have been identified in ATC, such as BRAF, RAS, and P53 mutations (6, 7). Many signaling pathways can be targeted, such as the RAS/RAF/ERK pathway and the PI3K/AKT/mTOR pathway (8). At present, many drugs targeting the above mutations and pathways have been developed, and a large number of clinical trials have shown various results. Thanks to the pioneering work of the 2018 Nobel Prize winners in medicine, James Patrick Allison and Tasuku Honjo, immune checkpoints and immunotherapy have become popular topics. Currently, immunotherapy is used in the clinical treatment of patients with various cancers, such as non-small cell lung cancer and melanoma, and has achieved good therapeutic effects (9, 10). Immunotherapy, especially immune checkpoint blockade therapy, has also made good progress in the treatment of ATC.
Currently, immunotherapy is considered a promising strategy by some experts, and some related drugs for ATC are being researched clinically, but there are still many patients who do not respond to immunotherapy or develop treatment resistance, which needs to be addressed (11). Simultaneously, scientists have been pleased to discover that immunotherapy drugs combined with targeted therapy drugs can increase the anti-tumor effect of the targeted therapy drugs (12). In this article, we review the clinical studies on targeted therapy and immunotherapy for ATC, looking for the best methods of combining targeted therapies and immunotherapy.

Targeted therapy

Targeted therapy usually refers to treatments that specifically target molecules related to the tumor formation process and cause less damage to normal cells. Targeted therapies block the activity of specific molecules that are essential for cancer growth and development (13). Most targeted therapies involve small molecules or monoclonal antibodies (14). Here, we present the most commonly dysregulated signaling pathways in ATC (Figure 1) and outline the corresponding drugs and their relevant advances in clinical experiments.

MAPK pathway

The MAPK signaling pathway plays an important role in the occurrence and development of ATC (15). Signal transduction through the MAPK pathway occurs after extracellular growth factors bind to a variety of tyrosine kinase receptors (TKRs), which in turn leads to the activation of RAS (16). RAS is a small GTP-binding protein that exists in three isoforms: HRAS, KRAS, and NRAS. Once activated, RAS engages BRAF, which phosphorylates and activates MEK (17). MEK signals to ERK, which enters the nucleus and enhances the transcription of a number of transcription factors, leading to increased cell proliferation and cell survival. Since the discovery of this signaling pathway, researchers have gradually developed drugs that target it for testing, such as tyrosine kinase inhibitors (TKIs), BRAF inhibitors, and MEK inhibitors.

FIGURE 1 The MAPK and PI3K/AKT pathways in ATC. These pathways are responsible for angiogenesis, proliferation, and tumorigenesis. Shown are several medications targeting specific mutations, upstream receptor tyrosine kinases, and genetic rearrangements that have completed or are in clinical trials of targeted therapy for ATC.

Tyrosine kinase inhibitors

At present, it has been found that TKIs can achieve their anti-tumor effect by inhibiting the repair of tumor cells, blocking cell division in the G1 phase, inducing and maintaining apoptosis, and inhibiting angiogenesis (5). Some phase 1 and phase 2 clinical trials have reported the results of tyrosine kinase inhibitor monotherapy for ATC, with the proportion of patients achieving objective remission between 0% and 25% (18)(19)(20)(21)(22)(23)(24). Sorafenib is the first oral multi-kinase inhibitor that targets BRAF, rearranged during transfection (RET), KIT, platelet-derived growth factor receptor (PDGFR), and vascular endothelial growth factor receptors (VEGFR) 1-3. The NCT00126568 trial evaluated the effectiveness of sorafenib in anaplastic thyroid cancer (18). Among the 20 recruited patients, the disease control rate (DCR) was 35%, and the median overall survival (OS) was 3.9 months. In this trial, patients treated with sorafenib experienced only grade 3 and grade 4 toxic reactions with mild symptoms. In contrast, the NCT02114658 trial evaluated the safety and effectiveness of sorafenib in Japanese medullary thyroid carcinoma (MTC) and ATC patients and concluded that it seems to be effective for advanced MTC but ineffective for ATC (19). A total of 20 patients were recruited in this study, of whom 8 had MTC and 10 had ATC.
Two of the MTC patients had a partial response (PR) (25%), but none of the ATC patients had symptom relief, and only four showed stable disease (SD) (40%). To further explore the role of sorafenib in the treatment of ATC, a study of sorafenib as an adjuvant treatment for ATC was carried out (NCT03565536). Ten participants were expected to take part in the trial, which used sorafenib as adjuvant treatment together with surgery if the patient's condition was relieved. Lenvatinib is a representative oral TKI targeting VEGFR1-3, fibroblast growth factor receptors (FGFR) 1-4, PDGFR, RET, and KIT. A phase II trial tested the effectiveness and safety of lenvatinib for the treatment of patients with advanced thyroid cancer (20). Of the 51 patients, 17 with ATC were recruited, and most of them had received chemotherapy and radiotherapy. Among them, 4 patients achieved remission (24%). This high response rate in ATC patients is encouraging, and the toxicity is controllable, but the study did not evaluate the correlation between biomarkers and the efficacy of lenvatinib in ATC. Therefore, lenvatinib will be examined as an exploratory endpoint in phase 2 trials of patients with ATC (NCT02657369, NCT02726503). The NCT02657369 trial was terminated because there was only one PR (3%) among the 33 recruited patients. The results of the NCT02726503 trial have not yet been reported. In addition to the TKIs mentioned above, other TKIs have also undergone clinical research. Imatinib is an inhibitor of Bcr/Abl, PDGFR, c-Fms, KIT, and RET. A clinical study of imatinib in advanced ATC demonstrated its efficacy and good tolerability (23). Eleven patients with ATC were recruited in this study. Eight of the 11 patients had evaluable treatment effects. Two patients (25%) developed PR, and four patients (50%) developed SD.
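The response figures quoted throughout this section follow the usual RECIST-style categories. A minimal sketch of how such rates are derived, using the imatinib numbers above; the assumption that the two remaining evaluable patients progressed is ours, made only to complete the example:

```python
def response_rates(cr, pr, sd, pd):
    """Objective response rate (CR+PR) and disease control rate (CR+PR+SD)
    as fractions of evaluable patients, per RECIST-style categories."""
    n = cr + pr + sd + pd
    return (cr + pr) / n, (cr + pr + sd) / n

# Imatinib study above: 8 evaluable ATC patients, 2 PR, 4 SD.
# Assumption (ours): the remaining 2 evaluable patients progressed.
orr, dcr = response_rates(cr=0, pr=2, sd=4, pd=2)
# orr = 0.25 (the 25% PR rate quoted), dcr = 0.75
```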
Sunitinib is an oral multitargeted TKI that inhibits VEGFR1-2, PDGFR, KIT, FMS-like tyrosine kinase-3 (FLT3), the macrophage colony-stimulating factor receptor (CSF1R), and RET. One article evaluated the efficacy and safety of sunitinib in the treatment of locally advanced or metastatic TC (NCT00510640) (22). The study recruited a total of four ATC patients, and only one of them achieved SD. Compared with the other two TC subtypes in this study, sunitinib did not change the prognosis of patients with ATC. Gefitinib is an EGFR inhibitor, and a phase II clinical study of gefitinib for the treatment of advanced TC evaluated its efficacy (24). Among the 5 ATC patients recruited, only 1 achieved SD. Pazopanib is a VEGFR, PDGFR, and KIT inhibitor. A phase II study (n=16) of patients with advanced ATC evaluated pazopanib, although one patient withdrew before the start of treatment (21). Transient disease regression was observed in several patients, but no RECIST responses were confirmed, so pazopanib monotherapy cannot be recommended as a reasonable treatment for ATC. In general, these results indicate that the above-mentioned TKIs show modest single-agent activity in the treatment of ATC. In some of these trials, the small number of patients limits the conclusions that can be drawn about drug response; it may therefore be necessary to further study the application of TKIs in ATC in a multi-institutional setting.

BRAF inhibitor

BRAF is one of the first and most studied point mutations in thyroid cancer (25). BRAF is part of the MAPK pathway and is essential for the regulation of cell growth, proliferation, and survival (26). BRAF mutations have been observed in 41% of ATC patients; therefore, targeting BRAF is very important for the prognosis of ATC patients (27). Kristen et al. reported a 51-year-old male patient diagnosed with ATC whose genetic analysis showed a BRAF mutation (28).
Subsequently, treatment with the BRAF inhibitor vemurafenib was initiated. The patient had experienced progressive dyspnea, and computed tomography (CT) of the chest showed lung infiltration and worsening nodules. On day 38, 18F-FDG-PET and chest CT indicated that the metastatic disease was almost completely eliminated. A phase II clinical study evaluated the role of vemurafenib in a variety of non-melanoma cancers with BRAFV600 mutations (NCT01524978) (29). Among the seven recruited patients with ATC, there was 1 complete response (CR) and 1 PR. Although the number of patients recruited was small, this is notable given the limited treatment options available for ATC. Furthermore, dabrafenib has also shown antitumor activity in patients with BRAFV600E-mutant anaplastic thyroid cancer. However, dabrafenib is generally used in combination with other drugs because of the resistance of thyroid cancer cells.

PI3K/AKT pathway

The PI3K/AKT pathway is the second most frequently dysregulated pathway after the MAPK pathway in ATC (30). Similar to MAPK, the PI3K cascade is triggered by RTK and RAS proteins. Once activated by NRAS, PI3K catalyzes the phosphorylation of PIP2 to PIP3. PIP3 acts as a second messenger, and its production is inhibited by PTEN. PIP3 can activate AKT, which in turn phosphorylates mTOR and a number of other targets, driving changes that favor cancer cell survival. As a downstream molecule of the PI3K/AKT pathway, mTOR signaling is commonly activated in tumors and controls cancer cell metabolism by altering the expression and/or activity of a number of key metabolic enzymes (31). One study showed that mTORC1 signaling affects one-carbon metabolism through an ATF4-dependent transcriptional induction of the mitochondrial tetrahydrofolate cycle (32). In addition, a previously uncharacterized protein, SAMTOR, could function as a SAM sensor linking one-carbon metabolism to mTORC1 signaling (33).
Currently, related inhibitors of mTOR have been clinically tested in patients with ATC. As early as 2013, a study on the efficacy and safety of the mTOR inhibitor everolimus in the treatment of locally advanced or metastatic TC (NCT01164176) evaluated the therapeutic effect of everolimus on ATC (34). None of the six ATC patients among those recruited experienced remission. In a subsequent study to determine the efficacy and safety of everolimus in patients with advanced follicular TC, 7 patients with ATC were enrolled (35); however, no ATC patients in the study benefited. Fortunately, in a clinical phase II study of the efficacy of everolimus in radioiodine-refractory TC, one of the seven recruited ATC patients (14%) exhibited PR, and two patients (28%) had a median progression-free survival (PFS) of 2.2 months (36). The study also performed genetic sequencing of six patients with ATC; preliminary data show that ATC patients with mTOR mutations exhibit the greatest benefit. Sapanisertib (MLN0128) is a new type of mTOR inhibitor that has previously been shown to have anti-tumor activity in patients with other cancers (37). Currently, a phase II clinical trial of sapanisertib (MLN0128) for the treatment of metastatic ATC is enrolling patients (NCT02244463) and is expected to be completed before December 2022.

Combined targeted drugs

The diversity of mechanisms of tumor occurrence and development, as well as their intertwined regulation, poses challenges for targeted therapies. However, because of mutations in different targets in tumors and the intertwined regulatory mechanisms between signaling pathways, combinations of different targeted drugs may be possible. Consider BRAF inhibitors as an example. At present, TC and other cancer cells mainly resist BRAF inhibitors by reactivating the MAPK pathway (38).
This mainly includes increased expression of RTKs, activating mutations in upstream signals, changes in downstream MAPK pathway components, activation of parallel signaling pathways, BRAF amplification, and alternative splicing. Therefore, completely blocking the MAPK pathway may be necessary to enhance the anti-tumor activity of BRAF inhibitors. In one study, the combination of the MEK inhibitor PD0325901 with the BRAF inhibitor PLX4720 resulted in better inhibition of ATC cell growth than PLX4720 alone (39). Moreover, one study reported that the combination of trametinib and pazopanib in anaplastic thyroid cancer cell lines resulted in synergistic inhibition of tumor growth (40). This shows that combined targeted therapy provides new possibilities for improving the prognosis of patients with ATC, and combined targeted therapy has now been used in clinical practice. Previously, a phase II clinical study evaluated the efficacy and safety of dabrafenib plus trametinib in patients with locally advanced or metastatic BRAF V600-mutant ATC (NCT02034110) (41). At the beginning of the study, 16 ATC patients were sequenced, and 15 were found to have BRAF mutations. Preliminary results showed that 11 patients were in remission (1 CR and 10 PR). Because dabrafenib plus trametinib has strong clinical activity against BRAF V600E-mutant ATC, it has been approved by the FDA for the treatment of patients with BRAF-positive ATC. In addition, another phase I study is currently underway, aiming to evaluate the efficacy of dabrafenib, trametinib, and intensity-modulated radiation therapy (IMRT) (NCT03975231). The study will be conducted in 6 patients and is expected to be completed in April 2025. There have also been trials of other targeted drug combination therapies. A phase II clinical study in radioiodine-refractory TC evaluated the efficacy of combination therapy with the mTOR inhibitor sirolimus and sorafenib (42).
Among the two recruited patients with ATC, one had a PR and did not carry the BRAF mutation. The above studies have shown that targeted therapeutic agents can achieve potential therapeutic effects in ATC. However, to date, the results for most targeted therapies have been unsatisfactory, and resistance to kinase inhibitors remains a major obstacle in ATC therapy. Several clinical trials are underway to explore the appropriate timing and sequence of targeted therapy in ATC (Table 1), from which more comprehensive conclusions can be drawn regarding the effects of targeted drugs on ATC.

Immunotherapy

Cancer immunotherapy is cancer treatment that induces, enhances, or inhibits specific immune responses. It involves multiple immune cells and processes, including overcoming immunosuppressive signaling, T cell priming and differentiation, and enhancing tumor-associated antigen (TAA) presentation (Figure 2). In addition to altering proliferation and using other methods to resist attack by the immune system, tumors can also escape immune surveillance through immunoediting (43). Next, we introduce several immunotherapy strategies used for ATC, focusing on three aspects: immune checkpoint blockade, adoptive cell therapy, and oncolytic viruses.

Immune checkpoint blockade

Immune checkpoints are a series of immunosuppressive pathways of immune cells that regulate the persistence of the immune response while maintaining self-tolerance (44). At present, researchers have conducted many studies on PD-1/PD-L1, but research on other immune checkpoints, such as CTLA-4 and CD27, is still rare in ATC.

CTLA-4 blockade

Cytotoxic T lymphocyte-associated antigen-4 (CTLA-4), also known as CD152, is a transmembrane receptor expressed on T cells.
CTLA-4 shares its ligands with CD28: the B7 molecules, i.e., the costimulatory molecules CD80/CD86 (also known as B7-1/B7-2), which are expressed on the surface of antigen-presenting cells (APCs) (45). The binding affinity of CTLA-4 for B7 is much higher than that of CD28 (46,47). When T cells are activated, CTLA-4 is upregulated and competes with CD28 for binding to B7, thereby transmitting an inhibitory signal for T cell activation and participating in the negative regulation of the immune response (45). One study found that CD80 mRNA levels were decreased in 81.82% (9/11) of ATC patients (48). Anti-CTLA-4 drugs act at the initiation of the immune response by blocking the interaction between CTLA-4 on T cells and B7 on APCs. Ipilimumab is a human IgG1 monoclonal antibody that inhibits the interaction between CTLA-4 and its ligands. In 2011, on the basis of improved clinical efficacy, the FDA approved ipilimumab for the treatment of unresectable stage III/IV melanoma. The study showed that, compared with placebo, the relapse-free survival (RFS), overall survival (OS), and distant metastasis-free survival of melanoma patients treated with ipilimumab were significantly better (49). One study confirmed that the CTLA-4 ligand is deregulated in papillary thyroid carcinoma (PTC) and ATC tissues, as is PD-1, suggesting a possible prognostic value of CD80 gene expression in ATC (48). Currently, a clinical trial of a CTLA-4 antagonist combined with a PD-1 antagonist for ATC is being conducted (NCT03246958).

Figure 2: Immune checkpoints and co-stimulatory signaling. Many of the ligands bind to multiple receptors, some of which deliver co-stimulatory signals and others inhibitory signals. The expression of ligands and receptors should not be considered exhaustive; for example, PD-L1/PD-L2 are also expressed by antigen-presenting cells such as macrophages and dendritic cells (not shown in the figure), and MHC-II is also expressed by dendritic cells.
PD-1/PD-L1 blockade

PD-1 (programmed death receptor 1) is an inhibitory receptor expressed on a variety of immune cells, including T cells, B cells, dendritic cells (DCs), monocytes, and natural killer (NK) cells (50). The interaction between PD-1 and its ligand PD-L1 or PD-L2 leads to the downregulation of effector T cell responses and mediates immune tolerance, resulting in the immune escape of tumor cells. PD-L1 is overexpressed in many types of tumors and is associated with poor prognosis. Many studies have found that PD-L1 is highly expressed in ATC tissues and can promote tumor progression (48,(51)(52)(53)(54)(55)(56). Lymphocyte infiltration in the ATC group was significantly higher than in the differentiated thyroid carcinoma (DTC) group, as were PD-L1- or PD-1-positive lymphocytes (57). High PD-1/PD-L1 expression predicts poor prognosis in patients with ATC in terms of OS and progression-free survival (PFS) (53). This indicates that the PD-1/PD-L1 pathway plays a key role in ATC. Currently, research on PD-1/PD-L1 inhibitors has been successful. Spartalizumab (PDR001) is a humanized monoclonal antibody that targets PD-1 on the surface of human immune cells, with immune checkpoint inhibition and anti-tumor activity (58). In a phase II clinical trial (59), patients with locally advanced and/or metastatic ATC received 400 mg of spartalizumab intravenously every 4 weeks. The overall response rate (ORR) was 19%, including 3 patients with complete response and 5 patients with partial response, confirming the efficacy of the PD-1 inhibitor spartalizumab (PDR001) in the treatment of ATC. Spartalizumab thus shows good clinical activity and safety in patients with this incurable malignancy and short life expectancy.
In addition, a single-arm, multi-center study of the PD-1 monoclonal antibody HX008 has not yet begun; its purpose is to evaluate the efficacy and safety of HX008 injection in patients with metastatic or locally advanced ATC (NCT04574817). Meanwhile, combination therapy with an anti-PD-1 antibody and a CTLA-4 antibody in patients with stage II TC is currently ongoing (NCT03246958). To test the effectiveness of PD-L1-blocking combination therapy, other clinical trials are ongoing (NCT03181100, NCT03122496, and NCT04400474). These studies suggest that blocking the PD-1/PD-L1 pathway using checkpoint blockers may be an effective treatment. All clinical trials of PD-1/PD-L1 axis blockade for ATC are listed in Tables 2, 3. In some studies, checkpoint inhibitors were discontinued because of toxicity, but their overall tolerability was superior to that of chemotherapy. Immune-related adverse events are usually caused by autoimmune inflammation in various organs resulting from excessive activation of T cells. Kollipara et al. (60) described the adverse reactions of a BRAF-positive patient treated with the PD-1 inhibitor nivolumab, who developed nausea, vomiting, and diarrhea during the 12th cycle of nivolumab administration and was diagnosed with acute colitis by colonoscopy. Similarly, another study described adverse manifestations of rapid and intense reactions to pembrolizumab monotherapy in ATC patients (61). These results reflect the potential for serious adverse reactions associated with PD-1/PD-L1 inhibitors. In one case report (62), two patients with ATC received anti-PD-1 drug treatment, one of whom responded poorly. This reflects that, in some cases, ATC patients respond poorly to PD-1/PD-L1 inhibitors, and the response rate to different drugs varies greatly. Therefore, the toxicity of inhibitors and the poor responses of some patients are still issues that need to be overcome.
It is worth noting that the success of anti-PD-1/PD-L1 therapy depends not only on positive PD-L1 expression but also on CD8+ tumor-infiltrating lymphocyte density and the ability of CD8+ T cells to recognize tumor antigens (55,63). This suggests a new approach to addressing the unsatisfactory effects of anti-PD-1/PD-L1 therapy.

Other immune checkpoint blockades

In addition, abnormal expression of immune checkpoint molecules such as CD27, CD47, and CD70 has also been reported in ATC tissues (52,64); these molecules may therefore be potential targets for ATC immunotherapy. CD27 is a member of the tumor necrosis factor (TNF) receptor family, and its ligand CD70 is a member of the TNF superfamily (65). CD27 is constitutively expressed in T lymphocytes, B lymphocytes, and NK cells (66). In a study by Karen et al. (52), the expression of CD70 in 49 ATC cases was analyzed. The results showed that CD70 expression was upregulated in 49% of the samples and was diffusely expressed in 41.7% of the samples. They also noticed that CD27 expression was weak and focal in all three specimens, and all ATC samples expressed CD27 in the surrounding and tumor-infiltrating lymphocyte subsets. These data indicate that CD27-CD70 interaction in ATC mainly occurs between CD27+ lymphocytes and the CD70+ tumor cells they contact. In summary, this report demonstrated that CD70 expression exists in a large proportion of ATC samples, and CD70 can be used as an anti-tumor target for immunotherapy. Since lymphocytic infiltration of tumors is generally low, further studies are needed to determine the most effective therapy for patients with ATC. CD47 is a five-transmembrane receptor that inhibits phagocytosis via its counter-receptor, signal regulatory protein α (SIRPα) (64). In another study (67), Christian et al. analyzed the expression of CD47 in 19 human primary ATC tissues.
The results showed that TAMs heavily infiltrated human ATC samples, and that CD47 and calreticulin were also expressed. Blocking CD47 promoted macrophage phagocytosis of ATC cell lines and inhibited tumor growth in vitro. To verify the validity of the in vitro phagocytosis experiments, anti-CD47 antibodies were used to treat mice bearing ATC cell line xenografts and tamoxifen-induced double-transgenic ATC mice. The mouse experiments showed that treatment of ATC xenograft mice with an anti-CD47 antibody increased TAM frequencies, enhanced the expression of macrophage activation markers, enhanced tumor cell phagocytosis, and inhibited tumor growth. Blocking CD47 on CD47-expressing tumor cells also increased TAM frequencies in double-transgenic ATC mice. These results suggest that blocking CD47 could potentially improve the prognosis of patients with ATC and may be a valuable supplement to the current standard of care.

Adoptive cell transfer and CAR-T cell therapy

Adoptive cell therapy is another type of immunotherapy that relies on the active, sufficient recruitment of anti-tumor T cells in the body (68). Two methods have been used: one is to inject patients with natural host cells that have been expanded in vitro, and the other is to use chimeric antigen receptor T (CAR-T) cells to specifically recognize and kill tumor cells in different malignant tumors (69,70). Several studies have confirmed that both NK cells and CAR-T cells can effectively kill ATC cells (71,72). NK cells are important effectors of innate immunity and play an important role in maintaining homeostasis by producing cytokines and exhibiting effective cytotoxic activity. In addition, NK cells play a key role in adaptive immune mechanisms (73). Low levels of NK cells have been reported in thyroid tumors (74). In one study (72), a retrovirus was used to transduce the Effluc gene into human NK cells (NK-92MI), and human ATC cells (CAL-62) were transduced with the Effluc and Rluc genes.
ATC lung metastases were established by intravenous injection of CAL-62 into nude mice, and five million NK-92MI cells were then injected twice into the tail vein. NK cells significantly inhibited the growth of the metastatic tumors, suggesting that NK cell-based immunotherapy may be an effective treatment for ATC lung metastases. In CAR-T cell therapy, T cells are genetically modified to express a transmembrane protein, a synthetic T-cell receptor that targets a predefined tumor-expressed antigen. Adoptive CAR-T cell therapy has achieved very good results in hematological malignancies (75), and research on CAR-T cells in non-hematological solid tumors is ongoing. In a breakthrough animal experiment (15), it was found that CAR-T cells targeting intercellular adhesion molecule-1 (ICAM-1) can mediate strong and lasting anti-tumor activity, leading to tumor eradication and a significant increase in the long-term survival of ATC xenograft mouse models. Although the expression level of ICAM-1 varies among ATC cells, CAR-T cells can induce an increase in ICAM-1 expression so that all cells become targetable. ICAM-1 CAR-T cells have entered clinical trials in patients with ATC (NCT04420754). In summary, NK cell-based immunotherapy may be an effective treatment for ATC lung metastases, and ICAM-1 CAR-T cell treatment is supported theoretically and in ongoing clinical experiments. Although both adoptive cell therapy approaches are promising for ATC, their clinical significance needs to be further explored.

Oncolytic virus therapy

Oncolytic viruses are non-pathogenic viruses that specifically infect cancer cells. These natural or genetically engineered viruses are less toxic to normal cells but can kill cancer cells, and the release of tumor antigens through the lysis and destruction of cancer cells can stimulate the immune system and enhance immune function (76).
Some studies have identified several oncolytic viruses that can effectively inhibit the proliferation of ATC cells, including dl922-947, poxviruses, and Newcastle disease virus (77)(78)(79). According to a previous study (77), dl922-947 impairs ATC-induced in vitro angiogenesis and monocyte chemotaxis by reducing IL-8 and CCL2 levels; in an in vivo ATC model, dl922-947 treatment reduced angiogenesis and TAM density. The vaccinia virus is an effective poxvirus that can control proliferation and induce cell death in ATC cell lines (78). In addition, the oncolytic Newcastle disease virus (NDV) has shown the potential to induce tumor cell death in a variety of cancer cells from different sources. Jiang et al. (79) found that the recombinant reporter virus rFMW/GFP showed oncolytic activity in ATC cells through the p38 MAPK signaling pathway, representing a potential new ATC treatment strategy. Although there are no clinical studies yet, oncolytic virus therapy is a promising treatment for ATC.

Inhibiting recruitment of tumor-associated macrophages (TAMs)

TAMs are mature M2-polarized macrophages derived from blood monocytes and recruited by molecules produced by tumor cells and stromal cells at tumor sites. Increased TAM density has been reported to be associated with decreased survival in TC patients (80). Caillou et al. (81) showed that large numbers of TAMs are present in most ATCs. CSF-1 and CCL-2 have chemotactic effects on TAMs; therefore, blocking the CCL-2/CCR2 and CSF-1/CSF-1R pathways is a promising approach (82). TAM-directed treatment has been extensively studied in other tumor types, but studies in ATC are relatively lacking. Novel therapeutics are expected to offer better long-term survival for these patients.

Combining targeted therapy with chemotherapy or radiotherapy

Recent studies have shown that the combination of targeted drugs and chemotherapy exhibits a synergistic anti-tumor effect (83).
In one study, researchers treated ATC cells with lenvatinib and paclitaxel separately or in combination for 72 h and found that lenvatinib enhanced the cell cycle arrest and apoptotic effects of paclitaxel in ATC cells (83). In subsequent in vivo experiments, researchers established ATC tumor xenografts in nude mice and treated the mice with lenvatinib and paclitaxel alone or in combination. The results showed that the combined treatment reduced tumor weight more significantly, and that lenvatinib enhanced the anti-tumor effect of paclitaxel in ATC (84). Given this theoretical basis, there are new strategies for targeted drugs with poor monotherapy effects. A multicenter, open-label, nonrandomized, phase II trial established the safety and tolerability of efatutazone plus paclitaxel in anaplastic thyroid cancer, in which efatutazone (0.15, 0.3, or 0.5 mg) was administered orally twice daily together with paclitaxel every 3 weeks (85). Fifteen patients with ATC were enrolled in the study; the median PFS was 48 and 68 days in the 0.15 and 0.3 mg treatment groups, respectively. Another open-label, randomized, multicenter study evaluated the safety and efficacy of carboplatin/paclitaxel (CP) with or without fosbretabulin in ATC and concluded that the addition of fosbretabulin to CP did not significantly improve OS (86). In addition, a trial evaluating the efficacy and tolerability of the antifolate agent pemetrexed plus the chemotherapy drug paclitaxel in patients with recurrent/advanced follicular, papillary, or anaplastic thyroid cancer began on November 6, 2008, but no results have been released so far (NCT00786552). Currently, there is a study of the treatment of ATC patients with intensity-modulated radiotherapy and paclitaxel, with or without pazopanib (NCT01236547).
The preliminary results of this clinical trial show that targeted therapy combined with radiotherapy and chemotherapy does not have a clear therapeutic effect, but the additive effect of its toxicity is also not obvious. In summary, targeted therapy combined with chemotherapy has shown positive progress in the treatment of ATC.

Combining immunotherapy with chemotherapy or radiotherapy

Some clinical trials have investigated combinations of chemotherapy, radiation, and immunotherapy, especially with immune checkpoint inhibitors. Three patients with unresectable tumors were recruited for a phase II study of pembrolizumab combined with docetaxel, doxorubicin, and volumetric modulated arc therapy (VMAT) as the initial treatment for anaplastic thyroid cancer (87). They received the study drug (pembrolizumab, 200 mg intravenously) >3 days prior to chemoradiotherapy and then every 3 weeks thereafter until progressive disease, intolerance, or withdrawal of consent. Chemoradiotherapy with docetaxel (20 mg/m2) and doxorubicin (20 mg/m2) would typically start within 2-4 weeks of surgical resection, as deemed appropriate by the treatment providers. For radiation in the primary setting, VMAT consisted of 66 Gy administered in 33 fractions over 6.5 weeks to all gross disease in the neck. Early tumor responses were favorable in all three patients, and all three satisfactorily completed the intended radiotherapy, the pembrolizumab given before and concurrently with radiotherapy, and the concurrent chemoradiotherapy. However, all three patients died less than 6 months after therapy initiation, prompting study closure. Simultaneously, a phase II trial of atezolizumab in combination with chemotherapy is ongoing (NCT03181100), and a trial testing the safety of durvalumab (MEDI4736) and tremelimumab in combination with radiotherapy is ongoing (NCT03122496). Few trials have combined immunotherapy with chemoradiotherapy.
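The VMAT schedule described above is conventionally fractionated; as a minimal sketch (the helper function and its names are ours, not from the trial protocol), the per-fraction dose and weekly pace follow directly from the stated totals:

```python
def fractionation(total_dose_gy, fractions, weeks):
    """Dose per fraction (Gy) and fractions per week for a radiotherapy schedule."""
    return total_dose_gy / fractions, fractions / weeks

# 66 Gy in 33 fractions over 6.5 weeks, as in the regimen described above
dose_per_fx, fx_per_week = fractionation(66, 33, 6.5)
print(dose_per_fx)  # 2.0 -> 2 Gy per fraction, i.e., conventional fractionation
```

The roughly five fractions per week implied by these numbers corresponds to once-daily weekday treatment.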
Although one trial initially suggested tolerability and effective local disease control, the disappointing survival results increase the uncertainty about which pilot approaches in ATC merit further pursuit.

Combining immunotherapy with targeted therapy

It has been found that combining immunotherapy drugs with targeted therapy drugs may strengthen the antitumor effect of the targeted drugs, showing the promise of combined treatment in ATC. For patients with ATC, pembrolizumab combined with lenvatinib can effectively delay disease progression, with a median PFS of 16.5 months, although half of the patients experienced adverse effects (88). One study showed that the addition of pembrolizumab in the early stage of progression or of kinase inhibitor (KI) treatment could also enhance the efficacy of kinase inhibitors to a certain extent (89). Experimental studies on this topic are ongoing. One study is examining whether standard treatment with dabrafenib and trametinib plus cemiplimab can be an effective method for the treatment of ATC (NCT04238624). To evaluate the combined efficacy of lenvatinib and pembrolizumab, a clinical trial aims to establish a safe and effective treatment for metastatic ATC (NCT04731740). A phase II experiment was conducted to study the preoperative efficacy of pembrolizumab, dabrafenib, and trametinib in BRAF V600E-mutant ATC (NCT04675710). In 2017, Kollipara et al. reported an encouraging case of a 62-year-old man diagnosed with ATC (60). Initially, the patient underwent thyroidectomy and lymph node dissection, followed by chemotherapy. Next-generation sequencing was performed to guide treatment. The tumor was found to be BRAF- and PD-L1-positive, and the patient was treated with vemurafenib (a BRAF inhibitor) and nivolumab (a human IgG4 anti-PD-1 monoclonal antibody).
After 20 months of treatment with nivolumab, the metastatic lesions continued to decrease, with complete radiological and clinical remission. In terms of oncolytic therapy combined with targeted therapy, researchers have made progress in animal experiments. In 2015, Passaro et al. confirmed that PARP inhibition increases dl922-947 replication and oncolytic activity in vitro and in vivo (90). In 2020, Crespo-Rodriguez et al. found that oncolytic herpes simplex virus (oHSV) in combination with BRAF inhibitors significantly improved survival in ATC mouse models by enhancing immune-mediated antitumor effects, and that the addition of PD-1 or CTLA-4 blockade further improved therapeutic efficacy (91). The use of targeted drugs to enhance the cytocidal activity of NK cells against tumor cells in vivo or in vitro is also an important research direction. Because indoleamine-2,3-dioxygenase (IDO) can induce the production of kynurenine and reduce the expression of the NKG2D and NKp46 receptors on NK cells, NK cell function is reduced in patients with TC (92). In 2018, a study showed that prostaglandin E2 (PGE2) produced by TC inhibits the cytolytic activity of NK cells, and that ATC cells release more PGE2 than PTC cells (93). Therefore, IDO and PGE2 may be effective targets for enhancing NK cell activity in ATC tissues. Furthermore, IDO1 could play an important role beyond immune regulation, with the potential to influence one-carbon metabolism in cancer cells (94). In these experimental settings, immunotherapy combined with targeted therapy was more effective than either immunotherapy or targeted therapy alone. However, there are few clinical experiments in this regard, and further research is needed (Table 3).

Conclusion

ATC is a rare and aggressive thyroid cancer, the histological subtype with the worst prognosis. Traditional treatments include surgery, radiotherapy, and conventional chemotherapy. However, these treatments are insufficient.
Targeted therapy has long been used as a treatment for ATC, while immunotherapy is still in the experimental stage; however, numerous experiments have shown that it can be a potential therapy. Studies have found that targeted therapy is beneficial in improving the therapeutic effect and quality of life of patients, but it has considerable adverse effects. These can be mitigated by lowering the dose and using supportive medical therapy to treat adverse symptoms. At the same time, owing to the development of tumor escape mechanisms and the temporary nature of the effect, acquired drug resistance develops rapidly and the response is not durable; it is therefore necessary to conduct in-depth research on the mechanisms of tumor resistance, re-analyze gene mutations, and precisely target the treatment site. One-carbon metabolism has been found to contribute to a variety of downstream pathways known or potentially beneficial for cancer cell survival, and a detailed understanding of it may allow more precise targeting of the specific pathways most important for cancer cell survival (95). In immunotherapy, the presence of TAMs, NK cells, and other tumor-infiltrating lymphocytes (TILs) in ATC tissues highlights the relevance of tumor-immune cell interactions (74). Many clinical trials have been conducted on the PD-1/PD-L1 pathway, demonstrating the promise of this pathway. Adoptive cell therapy has shown good results in preclinical mouse models of ATC lung metastasis. In addition, oncolytic virus strategies have been shown to play an anti-tumor role through a dual mechanism of selectively killing tumor cells and inducing systemic anti-tumor immunity. Despite major breakthroughs in clinical practice, immunotherapy remains ineffective for most patients, and a variety of unpredictable toxic effects that must be carefully managed add to the difficulty of treatment.
Moreover, as the effect of monotherapy for ATC was not satisfactory, researchers considered a multi-drug combination therapy strategy. Toxicity is the biggest limitation of combination immunotherapy and is more serious when PD-1/PD-L1-targeted drugs are combined with CTLA-4 inhibitory mAbs. In addition to toxicity, the appropriate treatment sequence and timing should be considered and selected when designing combination regimens. Further exploration of new drugs and innovative combination strategies is needed to minimize the toxicity of targeted therapies combined with immunotherapy. Predictive biomarkers are urgently needed to guide precise immunotherapy and to explore new combination therapy strategies to harness the immune system, enhance anti-tumor efficacy, and deliver optimal treatment to each patient. In general, effective treatments for ATC are limited, and there is an urgent need to explore new treatments. Although immunotherapy has not yet been approved for ATC, it has been shown to be effective in some malignancies, such as melanoma, non-small cell lung cancer, and leukemia. Compared to targeted therapy, there are fewer studies on immunotherapy, but its successful results should not be ignored. With the development of immunotherapy, it can be hoped, and even expected, that novel therapeutics will arise offering longer-term survival for ATC patients. As a promising treatment for ATC, immunotherapy and its combination with targeted therapy should be the focus of future studies.

Author contributions

XG completed the writing. CH was involved in the design of the manuscript. YX completed the documentation and figure drawing. XZ was responsible for the revision of the manuscript. All authors contributed to the article and approved the submitted version.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Temperature-dependent development and freezing survival of protostrongylid nematodes of Arctic ungulates: implications for transmission

Background: Umingmakstrongylus pallikuukensis and Varestrongylus eleguneniensis are two potentially pathogenic lungworms of caribou and muskoxen in the Canadian Arctic. These parasites are currently undergoing northward range expansion at differential rates. It is hypothesized that their invasion and spread to the Canadian Arctic Archipelago are in part driven by climate warming. However, very little is known regarding their physiological ecology, limiting our ability to parameterize ecological models to test these hypotheses and make meaningful predictions. In this study, the developmental parameters of V. eleguneniensis inside a gastropod intermediate host were determined, and the freezing survival of U. pallikuukensis and V. eleguneniensis was compared.

Methods: Slug intermediate hosts, Deroceras laeve, were collected from their natural habitat and experimentally infected with first-stage larvae (L1) of V. eleguneniensis. Development of L1 to third-stage larvae (L3) in D. laeve was studied at constant temperature treatments from 8.5 to 24 °C. To determine freezing survival, freshly collected L1 of both parasite species were held in water at subzero temperatures from -10 to -80 °C, and the number of L1 surviving was counted at 2, 7, 30, 90 and 180 days.

Results: The lower threshold temperature (T0) below which the larvae of V. eleguneniensis did not develop into L3 was 9.54 °C and the degree-days required for development (DD) was 171.25. Both U. pallikuukensis and V. eleguneniensis showed remarkable freeze tolerance: more than 80% of L1 survived across all temperatures and durations. Larval survival decreased with freezing duration but did not differ between the two species.

Conclusion: Both U. pallikuukensis and V. eleguneniensis have high freezing survival that allows them to survive severe Arctic winters. The higher T0 and DD of V.
eleguneniensis compared to U. pallikuukensis may contribute to the comparatively slower range expansion of the former. Our study advances knowledge of Arctic parasitology and provides ecological and physiological data that can be useful for parameterizing ecological models.

Background

The extreme climate of the Arctic makes it one of the most challenging environments for living organisms. Winter temperatures can drop to -50°C in many parts of the Arctic, and sub-zero temperatures last for nearly two-thirds of the year. The summers are short, cool, and dry, providing a narrow developmental window for ectotherms, including parasites, and a short growing season for endotherms and plants [1][2][3]. Despite these adversities, a diverse group of flora and fauna, ranging from large mammals to microscopic parasites, constitutes Arctic biodiversity [4][5][6]. Studies have shown that these organisms develop unique physiological and behavioural strategies to cope with the extremes [7,8]. In the animal kingdom, nematodes are considered among the most diverse and successful organisms for their ability to adapt to diverse habitats and survive extreme environmental conditions [9,10]. Nematodes are often used as sentinels of climate-change impacts because larval development inside the intermediate host is temperature-driven and free-living larval stages are influenced by the external environment [5,[11][12][13]. For instance, lungworm-ungulate systems in the Canadian Arctic have become key in the understanding of the impacts of climate warming on host-parasite systems [5,14,15]. In order to determine the current and future impacts of climate change on disease dynamics and ecosystem health, laboratory- and field-based experiments can shed light on the temperature-dependent biology and ecology of both nematode parasites and host species [16][17][18].
Two lung nematodes, Umingmakstrongylus pallikuukensis Hoberg, Polley, Gunn & Nishi, 1995 and Varestrongylus eleguneniensis Verocai, Kutz, Simard & Hoberg, 2014, are parasites of Arctic ungulates in areas of the Canadian Arctic mainland and Victoria Island in the Arctic Archipelago [19][20][21]. Umingmakstrongylus pallikuukensis is a host specialist and only infects muskoxen (Ovibos moschatus) [22,23], whereas V. eleguneniensis is a multi-host parasite that infects muskoxen, caribou (Rangifer tarandus) and moose (Alces alces) [20,21,24]. The life-cycle of both parasites is indirect and involves a gastropod intermediate host. In the intermediate host, the first-stage larva (L1) develops into a third-stage larva (L3), and the development process is dependent upon temperature. Below a specific temperature (T0; the lower threshold temperature), larval development inside the intermediate host is minimal, and development to L3 does not take place [25,26]. Above the T0, L1 develop to L3 after accumulating a certain amount of heat, quantified as development degree-days (DD) [25]. The L3 are transmitted when the definitive ungulate host, while grazing, accidentally ingests either a gastropod containing L3 or free-living L3 that have emerged from the intermediate host into the environment [27]. L3 emergence is an adaptation, particularly common in northern protostrongylids, that may enable the availability of infective L3 in the environment throughout the winter when the gastropods are unavailable [26,27]. The developing larvae inside the intermediate hosts are protected from temperature extremes by the thermoregulatory behaviour and, presumably, the freeze-tolerance physiology of the gastropods [1,28]. However, the L1 and emerged L3 are directly exposed to the external environment and can experience prolonged and intense sub-zero temperatures. Umingmakstrongylus pallikuukensis and V.
eleguneniensis co-occur in muskoxen of the western Canadian Arctic and, until recently, were limited to the North American mainland and had not been found in the Arctic Archipelago. However, in 2008, U. pallikuukensis, and in 2010, V. eleguneniensis, were reported for the first time on southern Victoria Island, Nunavut, Canada [19] ( Fig. 1). Climate warming, with the consequent alteration of a previously unsuitable environment to one that is permissive for development and transmission of these parasites, is suggested as the driver for the invasion and establishment on Victoria Island [15,19]. Since their discovery on the island, both parasites have rapidly expanded their ranges northward, but at different rates, with U. pallikuukensis establishing at higher latitudes prior to V. eleguneniensis (Kafle, Kutz unpublished data). One hypothesis for the differential range expansion of the two parasites is different species-specific thermal requirements and tolerances for the development and survival of their larval stages. Indeed, understanding the developmental responses to temperature in intermediate hosts and freeze tolerance of L1 in the environment is vital for understanding the ecology and transmission dynamics of protostrongylids in general. Kutz et al. [25] studied the temperature-dependent development of U. pallikuukensis in its intermediate host, but similar information for V. eleguneniensis is lacking. Few previous studies have investigated the freezing survival of L1 of protostrongylids [29][30][31][32], and none have investigated the short-term or long-term survival of L1 at extreme sub-zero temperatures. The objectives of this study were to determine the temperature requirements and tolerances of V. eleguneniensis and U. pallikuukensis. Specifically, we investigated the temperature-dependent development of V. eleguneniensis in a gastropod intermediate host, the meadow slug Deroceras laeve (O. F. Müller, 1774), and compared the freezing survival of L1 of U. 
pallikuukensis and V. eleguneniensis. This study is essential to our understanding of the thermal ecology of these two emerging parasites in the Canadian Arctic. The resulting data on thermal tolerances provide essential parameter estimates for parasite distribution and transmission models [33] and will ultimately advance our knowledge on Arctic parasitology and parasite invasion.

Methods

Two sets of experiments were carried out to determine the impact of temperature on development and survival of U. pallikuukensis and V. eleguneniensis. First, temperature-dependent development of V. eleguneniensis in D. laeve was determined following the methodology used for U. pallikuukensis by Kutz et al. [25]. Secondly, the freezing survival of both species at various sub-zero temperatures was evaluated. Wild D. laeve were collected from their natural habitat [34,35]. The slugs were identified to species by external morphology [36] and transported to the laboratory, where they were stored and maintained in a perforated Rubbermaid® (Rubbermaid, Atlanta, USA) container [25] at 8 ± 2°C with 12 h light. All the slugs were collected during late spring/early summer and infected within two weeks of collection. The slugs that appeared old and less active were excluded from the experiments. The infected slugs weighed 104 ± 40 mg (mean ± SD). First-stage larvae of V. eleguneniensis were isolated from the feces obtained from wild muskoxen in northern Quebec (58.75°N, 68.55°W) using the modified beaker Baermann technique [37]. The fecal samples had been frozen at -20°C for over five years and repeatedly confirmed to have only V. eleguneniensis L1 [20,38]. The species identity was reconfirmed by morphology [39] and PCR. L1 were collected in a Falcon tube® (Eppendorf, Hamburg, Germany) and stored at 4°C (12-18 h) before the infection.

Slug infection with V. eleguneniensis L1

The experimental infection was performed as previously described by Hoberg et al. [22] and Kutz et al. [25], with some modifications.
Briefly, foot lesions of the wild-caught slugs were checked under a dissecting microscope to ensure that slugs were not infected with other protostrongylids [27]. For each of five temperature trials, 35 slugs (40 at 8.5°C) were used. Slugs were infected in groups of five in a medium-sized (9.1 cm) Petri dish (VWR, Canada). First, the Petri dishes were lined with Whatman® #3 filter paper (GE Healthcare Life Sciences, UK), moistened with clean tap water, and 1000 (or 1500 for the 20°C trial) motile L1 (estimated from aliquot counts, in 2 ml tap water) were spread evenly on the filter paper. The slugs were then placed on the Petri dish to start the infection. Contact between slugs and L1 was ensured using plastic tweezers to gently move the slugs that had crawled onto the sides or lids of the Petri dish back to the center of the dish every 15 min for three hours. All infections were performed at room temperature (20 ± 1°C) for three hours (14:00-17:00 h). For each trial, the dishes were then transferred to the respective temperature-controlled incubators, and infections were continued overnight. At 9:00 h the next morning, the slugs were transferred to a new Petri dish lined with moistened filter paper (except for the 8.5°C trial, where slugs were moved to a single larger Rubbermaid® (Rubbermaid, USA) container), and food (clean lettuce, carrot and a piece of chalk) was provided. Slugs were then kept in darkness for the remainder of the experiments, and the temperature was monitored every 15 min (or every hour in the 8.5°C trial) using a LogTag® temperature recorder (LogTag Recorders, NZ).

Slug digestion and larval examination

For each trial, three slugs were haphazardly selected at designated days post infection and digested in a pepsin hydrochloride solution [22,25]. The digestion days were chosen based on previous trials for V. eleguneniensis [38], and known development rates for related protostrongylids [25,26,38].
The goal was to determine the day on which the first intermediate third-stage larva (iL3), defined as a motile larva with fully developed intestinal cells [25], was present. Slug digestions were started at least five days in advance of when the earliest L3 were expected, and the first iL3 were detected at least three days after digestions began. Larvae isolated from the digests were examined under 400× magnification (Olympus CKX41, Olympus, Tokyo, Japan) and the developmental stage was determined [25]. The day when the first iL3 was detected in at least one of the digested slugs was the endpoint for determining development rate, as this was the endpoint used in previous studies on other species [25,26]. After detecting iL3, the remaining slugs were digested and L3 quantified, except in the trial at 12.5°C, where six slugs were separated into individual dishes for a pilot study on L3 emergence.

Freezing survival of U. pallikuukensis and V. eleguneniensis L1

Sources of L1

Varestrongylus eleguneniensis L1 were obtained from the fresh feces of a captive muskox that was experimentally infected with larvae obtained from wild muskoxen from northern Quebec (58.75°N, 68.55°W). Umingmakstrongylus pallikuukensis L1 were obtained from fresh fecal samples collected from wild muskoxen near Norman Wells, Northwest Territories (63.35°N, 126.52°W). Within 24 h of collection, fecal samples were transported to the lab in whirl packs (Nasco Whirl-Pak, Nasco, Ontario, Canada) maintained at 4 ± 1°C (temperature monitored by a LogTag® temperature recorder) and stored at 4 ± 1.5°C until processed. For both parasite species, L1 were extracted using the Baermann method [37] within one week of collection. The species' identities were confirmed morphologically following the guides by Kafle et al. [21,39].

Experimental design and larval observation

The freezing survival of U. pallikuukensis and V.
eleguneniensis was studied under four (-10, -25, -40 and -80°C) and three (-10, -25 and -40°C) sub-zero temperatures, respectively (Fig. 2). Each temperature treatment comprised 15 or 20 ELISA plates (Eppendorf) for each species (Fig. 2), with each of 40 wells in an ELISA plate containing one to ten individual parasites suspended in 200 μl of tap water. Survival to 2, 7, 30, 90 and 180 days post-freezing was evaluated for both parasites (Fig. 2). Before being subjected to freezing, each well (labelled with a unique identification number) was observed under 400× magnification, the species identity was reconfirmed morphologically, and the number of L1 present in each well was recorded. Only live L1 (motile larvae) were considered for the experiment. Wells that had over 10 initial individuals were not included in the study because it was difficult to accurately assess survival at such a high density of L1. The ELISA plates for both species were placed in a Rubbermaid container in temperature-controlled freezers. The temperature inside each container was monitored every 15 minutes using a LogTag recorder. On the day of observation, four plates of each species were selected randomly (three plates at -25°C because of a shortage of L1), left to thaw at room temperature on the lab bench for 1 h, and then observed under 200× magnification. Any larvae that were deformed were considered not viable and thus recorded as dead. Larvae that did not show any motility within two minutes of observation were kept at 4°C for another 24 h, and if they had not regained motility after 24 h, they were recorded as dead.

Data analysis

All analyses were conducted using R statistical software [40]. The thermal parameters for larval development inside the intermediate host, i.e.
the lower threshold temperature at which development is theoretically zero (T0), and the degree-days for development (DD), were estimated by fitting the daily development rate 1/D to a simple linear function of temperature:

1/D = (T - T0)/DD = -T0/DD + T/DD,

where D is the number of days from infection to the first appearance of intermediate third-stage larvae (iL3) and T is temperature. The parameters were estimated by linear regression of development rate (1/D) over temperature (T), where the slope equals 1/DD and the intercept equals -T0/DD [41,42]. We fitted a linear model for our parameter estimates for three reasons: (i) a linear model fitted our data well; (ii) our objective was to derive DD and T0; and (iii) we wanted to compare with other protostrongylids, especially the lungworm U. pallikuukensis, and linear models were used to derive the parameters for those protostrongylids. Binomial generalized linear models (GLM; logit link) were fitted to investigate the effect of temperature and freezing duration on the survival of U. pallikuukensis and V. eleguneniensis. The response was the proportion of individuals in each well surviving until inspection at 2, 7, 30, 90 or 180 days of freezing, calculated as the number of surviving L1 divided by the initial number of viable parasites in each well. Fixed effects included parasite species, temperature, freezing duration, and the interaction between temperature and species, representing the differential effect of freezing temperature on the two species. The plate was initially included as a random effect, but the variance among plates was small, and the inclusion of the random effect did not change the parameter estimates for the fixed effects, so the effect of the plates was ignored. Ten different models were fitted representing different combinations of the four fixed effects. The models were compared using the Akaike Information Criterion (AIC).
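As an illustration of this estimation step, a minimal sketch in Python (the study used R; the development times below are illustrative values consistent with the reported fit, not the study's raw data): DD and T0 fall out of an ordinary least-squares fit of 1/D against T.

```python
import numpy as np

# Illustrative (temperature °C, days to first iL3) pairs, chosen to be
# consistent with the reported parameters -- NOT the study's raw data.
temps = np.array([12.5, 16.0, 20.0, 24.0])
days = np.array([57.9, 26.5, 16.4, 11.8])

# Linear degree-day model: 1/D = T/DD - T0/DD
slope, intercept = np.polyfit(temps, 1.0 / days, 1)

DD = 1.0 / slope         # degree-days required for development
T0 = -intercept / slope  # lower threshold temperature (°C)

print(f"DD ≈ {DD:.1f} degree-days, T0 ≈ {T0:.2f} °C")
```

With these illustrative values the fit recovers DD and T0 close to the estimates reported below (171.25 degree-days and 9.54 °C).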
There was a high degree of model and parameter uncertainty, and so we report the model-averaged predictions for survival over the top five models, which comprised more than 90% of the cumulative Akaike weight [43], using the AICcmodavg library [44].

Results

Temperature-dependent development of V. eleguneniensis

Varestrongylus eleguneniensis larvae successfully developed from L1 to L3 at temperatures between 12.5 and 24°C (Table 1; Figs. 3 and 4). At 8.5°C, L1 developed to L2 by 50 days post-infection (dpi), but no L3 were observed in weekly slug digestions by day 101. After day 101, the sampling interval was changed to lengthen the trial, and the remaining slugs were monitored, fed regularly, and the slugs that died were digested. The last slug was digested at 166 days and no L3 were detected. Development occurred faster at higher temperatures (Table 1, Fig. 4). Development rate increased significantly with temperature according to the equation 1/dpi = -0.0557 + 0.0058 T (F(1, 2) = 1297, P < 0.0001), with R2 = 0.99. From this equation, the threshold temperature was determined as T0 = 9.54°C (95% CI: 8.25-10.57°C; based on the 95% CI of the model predictions in Fig. 2) and DD was 171.25 (95% CI: 153-194), which are within the range determined for other northern protostrongylids (Table 2). In the pilot study on larval emergence, L3 emerged from four of the six slugs maintained individually at 12.5°C from day 62 to day 87. Two slugs died at day 68, and no L3 had emerged from these slugs up to that point. For the remaining four slugs, larval emergence was first observed at 70 dpi (two slugs) and 74 dpi (two slugs). Emergence from all four slugs continued to day 83, and although no emergence was detected on subsequent observations, L3 were found inside all of the slugs on digestion after they died at days 85 and 87, respectively.

Freezing survival of U. pallikuukensis and V. eleguneniensis L1

L1 of both U. pallikuukensis and V.
eleguneniensis had high freezing survival at all temperatures and all durations (Fig. 5). There was a high degree of model uncertainty, with five of the ten models tested making up 90% of the cumulative Akaike weight (Table 3). Temperature and freezing duration were fixed effects in all five top models and, therefore, were more important drivers of survival than species (Table 3). Survival decreased with increasing freezing duration and decreasing temperature in the top five models (Table 4). Despite three of the top models including an effect of species (Table 3), within each model the parameter estimate(s) for species were not significantly different from zero (Table 4), suggesting that the effect of species was weak and the freezing survival of U. pallikuukensis and V. eleguneniensis L1 was similar (Fig. 6). The model-averaged predicted time for 50% mortality of L1 when kept at -25°C was 653 days (95% CI: 618-677) for U. pallikuukensis and 668 days (95% CI: 631-695) for V. eleguneniensis.

Discussion

As is typical of protostrongylids, the rate of larval development for V. eleguneniensis inside the intermediate host, D. laeve, was positively related to temperature, and the lower threshold temperature (T0) and the thermal constant required for development to the infective stage (DD) were within the range determined for other northern protostrongylids (Table 2). We found that D. laeve is a good intermediate host for V. eleguneniensis, and the observed larval establishment and L3 recovery were substantially better than those observed in our laboratory for the common garden slug, D. reticulatum [38]. On Victoria Island, where the range of V. eleguneniensis is rapidly expanding, D. laeve is the only suitable slug intermediate host that has been documented [35], and a high rate of larval uptake may be important for successful transmission. The evidence of larval emergence, albeit in a pilot study, supports the possibility of an ecological adaptation by V.
eleguneniensis to allow overwinter transmission, as described for U. pallikuukensis and, to a lesser extent, for P. odocoilei [26,27]. The high freezing survival of both U. pallikuukensis and V. eleguneniensis L1 not only provides new information on the cold hardiness of these parasites, but also quantifies these parameters for use in parasite transmission models. The rate of larval survival as a function of freezing temperature and duration is also important for designing cryopreservation strategies and estimating larval survival in frozen samples. The similar ability of U. pallikuukensis and V. eleguneniensis to survive freezing for extended periods is intriguing, as U. pallikuukensis, being a relatively specialized Arctic parasite, was expected to survive better than V. eleguneniensis, a parasite more broadly distributed across the sub-Arctic. Comparing the thermal requirements and freeze tolerance between V. eleguneniensis and U. pallikuukensis is important in the context of their differential rates of range expansion in the Canadian Arctic Archipelago. Field surveys suggest slower colonization of V. eleguneniensis compared to U. pallikuukensis, despite the former being a multi-host parasite of caribou, muskoxen, and moose with a greater dispersal potential with migrating caribou [21], compared to the latter, which is specific to muskoxen, a non-migratory species. While various ecological and epidemiological factors might influence the transmission dynamics of these parasites, their thermal requirements and overwinter survival, and those of the gastropod intermediate hosts, probably dictate their northern range limit. All else being equal, parasite species with lower T0 and DD are likely to expand their range more quickly and establish at a more northern latitude. Similarly, the parasite with greater L1 freeze tolerance has survival advantages at higher latitudes. Our findings of higher T0 and DD of V. eleguneniensis compared to U.
pallikuukensis [25] are consistent with their differential range expansion. As U. pallikuukensis and V. eleguneniensis do not differ in their ability to survive freezing, it is unlikely that freeze survival is contributing to the differential range expansion. In light of our findings and the preliminary results on the distribution of these parasites, it can be hypothesized that higher thermal requirements are contributing to the slower northward spread of V. eleguneniensis compared to U. pallikuukensis. This can be tested and validated by incorporating the parameters into models that determine the fundamental thermal niche for these parasites. However, our previous work, experimental studies and broad-based surveys suggest that U. pallikuukensis is much more fecund than V. eleguneniensis ([38,45]; Kafle, Kutz unpublished data). This may also contribute to the more rapid range expansion of the former parasite. Other life-history traits, such as infectivity, host abundance, and their interactions, are also important in the colonization success of invading parasites [17,46,47] and likely play a significant role here as well. The developmental parameters and freeze-tolerance capabilities among protostrongylids with Arctic and sub-Arctic distributions are quite comparable: developmental thresholds lie around T0 = 8 to 10°C and L1 are very resistant to the lethal effects of freezing [29][30][31][32]. Within this narrow range, however, T0 varies among parasite species and among intermediate hosts within a parasite species (Table 2). For instance, the T0 of Elaphostrongylus rangiferi, a common parasite of wild and semi-domestic reindeer, varies with the intermediate host species [25,48]. Based on two studies, the thermal requirement for development to L3 (DD), however, seems to be constant within a parasite species regardless of the intermediate host; DD is the same in different intermediate host species for U. pallikuukensis and E. rangiferi [25].
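To make the comparison concrete, under the linear degree-day model the predicted development time at a constant temperature is D = DD/(T - T0). A small sketch using the V. eleguneniensis estimates reported above (T0 = 9.54 °C, DD = 171.25):

```python
T0, DD = 9.54, 171.25  # V. eleguneniensis estimates from this study

def days_to_l3(temp_c):
    """Predicted days from infection to the first infective L3 at a constant temperature."""
    if temp_c <= T0:
        return float("inf")  # below the threshold, development to L3 does not occur
    return DD / (temp_c - T0)

for t in (8.5, 12.5, 20.0, 24.0):
    print(f"{t} °C -> {days_to_l3(t):.1f} days")
```

The prediction of no development at 8.5 °C matches the trial in which no L3 appeared, and a species with a lower T0 or DD would reach L3 faster at every temperature, consistent with the range-expansion argument above.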
The literature on the freezing survival of northern protostrongylids is scarce, but based on at least two well-designed studies it is clear that northern protostrongylids are highly cold tolerant. Shostak & Samuel [29] reported over 90% survival of P. odocoilei L1 suspended in water and frozen at -25°C for 280 days. In another study, Lorentzen & Halvorsen [49] did not observe reduced survival of E. rangiferi L1 when frozen in water at -20°C for 360 days, and a similar survival pattern was observed for L1 (in feces) frozen at -80°C for a similar time. Developmental thresholds and freezing survival of more temperate species seem to be comparatively lower. For example, Muellerius capillaris, a temperate protostrongylid, has a considerably lower T0 (4.2°C) and is less tolerant of freezing despite being relatively closely related phylogenetically to U. pallikuukensis [50]. Parelaphostrongylus tenuis, another temperate species, also has a lower freezing tolerance than its northern counterparts: approximately 70% survival of L1 when frozen in feces at -15 to -20°C for 182 days [32]. In another study, Forrester & Lankester [30] found that 76% of P. tenuis L1 survived a constant temperature of -14°C in the lab when frozen in fecal pellets for four months. The variation in developmental and survival parameters among protostrongylid species is most likely a result of biological differences between both parasite and host species that are shaped by various ecological and evolutionary processes [25,50]. For instance, it has been suggested that the higher T0 of northern protostrongylids is an ecological adaptation to Arctic conditions (e.g. long winters, a short transmission period) [51]. A higher T0 ensures that L1 enter the developmental phase only when the temperature is consistently warm, thereby increasing the chances of successful development to L3 in a single season.
This further prevents mortality of developing larvae (survival of L1 is higher than that of L2 and L3 in overwintering intermediate hosts) and of the intermediate hosts themselves over the winter [51]. The extreme freeze tolerance of U. pallikuukensis and V. eleguneniensis warrants further exploration of the mechanisms of freeze resistance. For other nematodes, several biochemical and physiological mechanisms to survive freezing have been identified [8,[52][53][54]. Biochemical mechanisms include the synthesis of different proteins (commonly called antifreeze proteins) or cryoprotectants (e.g. trehalose, glycerol) [9,55,56]. Physiological mechanisms include strategies to resist the lethal effects of freezing, which can be broadly classified into three types: (i) freeze avoidance (enabling body fluids to remain liquid at temperatures below their melting point); (ii) freeze tolerance (surviving some degree of ice nucleation in the body); or (iii) cryoprotective dehydration (desiccation at low temperatures to prevent freezing) [9]. We do not know the freezing survival strategy used by U. pallikuukensis and V. eleguneniensis L1, but because survival was very high in water, they may employ a freeze-tolerance strategy and survive inoculative freezing, as described for other nematodes [52,57,58].

Conclusions

The Arctic continues to warm at an unprecedented rate, driving the emergence and spread of pathogens [19,59,60] and thereby escalating threats to the sustainability of native Arctic wildlife [61][62][63]. Since healthy wildlife is critical for food safety and security in northern communities, it is vital to understand and anticipate future trends in emerging diseases in order to devise proactive management plans.
Ecological models parameterized with physiological data capture the mechanisms behind observed patterns of distribution and abundance, and are particularly useful as predictive frameworks for investigating an organism's response to climate change [16,[64][65][66][67]. Our study provides key development and survival parameters of two parasites that are undergoing rapid range expansion in the Canadian Arctic, and these parameters can be incorporated into mechanistic models to describe and forecast the climate-driven range expansion of these parasites, as well as to understand the current and future trends of the infection dynamics. Predicting changes in disease dynamics and wildlife health under unprecedented climate change requires experimental studies such as ours to elucidate organisms' ecophysiology in order to parameterize mechanistic ecological models [18,33]. Availability of data and materials The datasets supporting the conclusions of this article are available in the Zenodo repository, https://doi.org/10.5281/zenodo.1193254.
Evaluation of the Health Promotion Capabilities of Greenway Trails: A Case Study in Hangzhou, China

As a type of green infrastructure, greenways are beneficial for walking and cycling and promote urban health and well-being. Taking the Qingshan Lake Greenway Phase One (QLG-I) Trail in the Lin'an District of Hangzhou city as an example and based on the accessibility of points of interest (POI) near the QLG-I Trail, a questionnaire investigation, and an importance performance analysis (IPA), in this paper, we construct a methodological framework to evaluate the health-promotion capabilities of the QLG-I Trail, including three aspects: promoting the coverage of healthy travel, user attribute analysis, and user perceptions of the greenway for health promotion. The results show that the healthy travel range of the QLG-I Trail is small and that the users are mainly residents of nearby communities. Additionally, the main factors affecting users' health-promoting behaviour are safety, cleanliness, and infrastructure services. Although the overall satisfaction with service quality was good (3.93), we found that the trail facilities did not meet the needs of the users. This study confirms that the QLG-I Trail provides community residents with a place for sports activities and supports health-promoting behaviour. Greenway facilities and the natural environment enhance this utility; however, the coverage of healthy travel promotion is limited by accessibility. Finally, we propose a traffic-organization optimization and improvement plan for the QLG-I Trail. The research results may help promote healthy activities on this type of greenway.

Introduction

Health promotion is a key element of the new public health. Health promotion includes actions aimed at not only strengthening personal skills and abilities but also changing the social, environmental, and economic determinants of health to optimize their positive impact on public and personal health [1].
In this regard, the greenway trail presents an example. Under normal circumstances, a trail refers to any linear corridor that provides non-motorized access for recreation, and it can take various forms [2]. Compared with other green open spaces, greenway trails have unique linear characteristics and connection attributes [3]. By supporting non-motorized transportation, such as walking and cycling, they provide safe and easy-to-access green spaces and facilities, making them a community health promotion programme [4-6]. As green infrastructure, greenways and trails not only provide people with a healthier way to travel but also produce many health-promoting benefits. For example, they increase physical activity [7], alleviate mental stress, and provide opportunities for relaxation and family reunion [8]. In terms of serving the community and its environment [9], greenway trails have a significant health-promotion-service function, and they are becoming increasingly popular all over the world [10]. In recent years, research on greenways and trails has become a hot topic in the fields of ecosystem services supply and demand [11-13], green infrastructure [14], and urban non-motorized transportation [15]. Such research has mostly focused on the commercial economy [16], ecosystem supply [17-19], regional culture [20], health and well-being [21], and the entertainment value [22] brought by urban greenways and trails. Regarding research on greenways and people's physical and mental health, natural experiments [23] and health-perception-assessment scales [24] have been used to test whether there is a link between the greenscape and human health, or physical activity by users has been investigated to explore the health-promotion-service functions of greenways and trails [25]. These research results have confirmed that greenways and their environments have a positive effect on human health.
However, regarding the main beneficiaries of the greenway health promotion function, i.e., greenway users, it is necessary to explore their cognition and evaluation of this function. At the same time, obtaining real feedback on the use of greenways in a timely manner facilitates greenway builders in implementing bottom-up updates and improvements in health promotion. However, the current research on country greenways in this area is still relatively lacking. Taking into account this gap and based on existing research on the ecological, social and economic performance evaluation of the Qingshan Lake Greenway (QLG), we focus our follow-up research [26] on the evaluation of the health promotion services of greenways. The QLG is a coastal greenway located in the Lin'an District in Hangzhou city. It is an important regional green infrastructure built by the government. Since its completion in 2017, it has become an important place for leisure activities for the surrounding residents. In this article, we continue to use the QLG as the research object. Based on the connectivity and linear characteristics of the greenway, we evaluate the health promotion services that it provides and make suggestions to improve the health-promotion-service capabilities of country greenways. This research can also serve as a reference for the construction of similar greenways in China and even around the world. We mainly provide answers to the following research questions: (1) How large must the service area of the greenway trail be to promote healthy travel? (2) Who are the users of the greenway trail? What health-promoting behaviours do users engage in on the greenway? (3) What do trail users think are the greenway factors that affect the promotion of healthy behaviours? (4) How satisfied are users with the attributes of the greenway trail that provide health promotion services? 
Before introducing the research methods, this article reviews the literature on greenway trails and performance-evaluation methods. The literature review is followed by the results of the case study, the discussion, and the conclusion.

Literature Review

A large number of studies have shown that greenway trails provide the basis for important daily fitness activities for the surrounding residents. Price et al. analysed the demographic characteristics of greenway users, the reasons for using the Swamp Rabbit Trail, and the perception of the characteristics of the trail through online surveys, and they found that the greenway increased the opportunities for nearby residents to exercise [7]. Although greenway facilities have great potential to support sports activities, their use can be affected by the accessibility of the area around the greenway [4]. In particular, the demand for walking is often affected by attractiveness, comfort, safety, and accessibility [27]. Studies have used global positioning system (GPS) and geographical information system (GIS) technologies to predict the degree of use of greenways by measuring their accessibility, proximity, and opportunities [4]. Open digital map platforms, such as Google Maps, provide a new way to directly obtain travel cost indicators. They have been applied to urban traffic accessibility analysis [28]; however, they have been less used for greenway trail accessibility analysis. Walking and cycling are considered to be among the healthiest ways to travel [29], and the optimal amount of time that it should take residents to walk from their homes to city parks is within 15 min [30]. A survey in London also showed that the size, shape, and density of green spaces affect walking activities. Living near small parks or retail areas is significantly related to walking [31].
Therefore, we take the entrance of the first phase of the QLG Trail as the origin and take the area covered by 15 min (inclusive) of walking or cycling as the service scope of the QLG Trail to promote people's healthy travel, with the aim of encouraging people to choose healthier ways of travel. In addition, people's awareness of the health promotion services of greenway trails is crucial, and obtaining information on their awareness will facilitate greenway planners or managers in optimizing trail facilities in a targeted manner. Many studies have used questionnaires, field observations, interviews, statistical analysis, and post-use evaluation (POE) to obtain feedback and suggestions from the perspective of users after a greenway has been put into use [32][33][34]. In addition to the accessibility of greenway trails [35], Roe et al. found that people have different perceptions of urban green spaces that affect health because of the differences in their own health status, race, gender, and age [36]. By constructing the Scottish Walkability Assessment Tool (SWAT), Millington et al. found that indicators, such as the destination, safety, and aesthetics, were more reliable and can be used as influencing factors affecting people's walking [37]. However, in terms of health promotion, these studies did not conduct a comparative study of the importance of and satisfaction with the attributes of greenway trails, which makes it difficult for greenway managers to determine which aspects are the most important. Therefore, we use the importance performance analysis (IPA) model to conduct research in this area. The purpose is to understand the attributes of greenways and trails associated with health promotion services and to collect user opinions through on-site surveys and questionnaires. Martilla and James originally proposed the IPA model in 1977. 
This simple and practical method can help operators understand customer satisfaction with products or services and identify areas where service quality should be improved [38]. The IPA model has been widely used to evaluate park leisure services [39,40]; however, it has been less commonly used in greenway performance evaluations. Our study used an improved IPA model [41], and the space was divided into four quadrants, as shown in Figure 1. Importance is used as the horizontal axis, satisfaction is used as the vertical axis, and the overall average values of importance and satisfaction (x̄; ȳ) are used as the quadrant axes. This method can be used to understand the evaluation results more clearly and quickly and to provide intuitive construction reference data for the relevant departments and decision makers. Following the research of Tang et al.
[26,42], we constructed the methodological framework of this research based on the functional characteristics of greenways and trails, i.e., green travel, sports activities, and leisure and entertainment, combined with reachability analysis, user characteristic analysis, and the IPA model (Figure 2). This includes service scope analysis, user analysis, and the perceptual evaluation of trail attributes in terms of health promotion. First, we used the Amap open platform (https://lbs.amap.com/, accessed on 12 April 2021) for its nearby-search and path-planning functions to obtain point of interest (POI) data and travel time data for walking and cycling, and we combined these with the improved isochronous circle analysis method for green space traffic accessibility [43] to estimate the scope within which greenway trails promote healthy travel. Among these, the sample points were selected to more accurately obtain the time required for nearby residents to walk or ride to the entrance of the trail, yielding a more accurate analysis of the scope of healthy travel services than a simple buffer zone. An isochronous circle refers to the range that can be reached within a specific amount of time by selecting different modes of transportation from a certain point [44] and is commonly used in urban infrastructure accessibility analysis, urban traffic analysis, etc. [45-47]. The above method combines the tools of the Amap open platform with the isochronous analysis model to reduce the survey time and is more in line with users' real travel situations.
Then, through direct observation, interview records, and questionnaires, we can understand the usage habits and health behaviours of different service groups. We administered an IPA scale to obtain user perceptions in terms of health promotion. Based on the collected user feedback data, we can provide users with better services to meet their needs in terms of health promotion.

Study Area

The research results of this article are based on empirical research on the QLG. In the past three decades, similar to other cities, Hangzhou city, in Zhejiang Province, China, has been experiencing rapid and prosperous urban construction.
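As an illustration of the time-cost step, the sketch below assigns each POI sample point a walking and a cycling time to a trail entrance. This is a simplification, not the study's pipeline: it uses straight-line (haversine) distance as a stand-in for Amap's path-planning service, which follows the road network, and the function names are hypothetical. The speeds are those set in the path planning (4.2 km/h walking, 12 km/h cycling).

```python
# Each POI sample point gets a walking and a cycling time cost to a trail
# entrance. Straight-line distance stands in for the routing service here.
from math import radians, sin, cos, asin, sqrt

WALK_KMH, CYCLE_KMH = 4.2, 12.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in km."""
    rlat1, rlon1, rlat2, rlon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((rlat2 - rlat1) / 2) ** 2 + \
        cos(rlat1) * cos(rlat2) * sin((rlon2 - rlon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def travel_minutes(dist_km, speed_kmh):
    """Time cost attribute for one sample point, in minutes."""
    return dist_km / speed_kmh * 60.0

def time_costs(sample_pt, target_pt):
    """(walk_min, cycle_min) from a POI sample point to a trail entrance."""
    d = haversine_km(*sample_pt, *target_pt)
    return travel_minutes(d, WALK_KMH), travel_minutes(d, CYCLE_KMH)
```

In the actual workflow the times returned by the routing service account for the road network and terrain, so they exceed this straight-line estimate.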
In this regard, the construction of greenways has been the focus of Hangzhou's green infrastructure construction. As of 2020, the city had built approximately 3713 km of greenways, and a selection of the most beautiful greenways is held every year. The QLG Trail has a length of 42.195 km and consists of 12 main entrances and 13 service points connecting the surrounding communities and the public transportation system. The design concept of the QLG, "returning the lake to the people", is to provide residents with places to experience nature, walk, jog, and cycle for fitness and to increase residents' opportunities for outdoor activities to achieve the purpose of health promotion. The greenway was built in three phases. Among them, the Qingshan Lake Greenway Phase One (QLG-I) Trail focuses on culture, ecology, and sports. The standard width is 4 metres, and the total length is approximately 10 km (Figure 3). It was completed and put into use in 2017. In the same year, it was named one of the "Most Beautiful Greenways in Zhejiang". The QLG-I Trail is the main passage for residents of Lin'an city to enter Qingshan Lake National Forest Park. Compared with the second and third phases, it has a longer use time and a higher utilization rate.
The QLG-I Trail has four entrances and exits, namely, Wanghu Park (point A), Qianjin Wetland Park (point B), Qianwang Sculpture Square (point C), and Great Lawn Park (point D), which are also important landscape nodes along the greenway. These points divide the QLG-I Trail into three sections. The entrance of the AB section of the trail is close to the city's road transportation hub and is the closest to the city, and there are dense residential areas nearby. The BC section of the trail is located on the lake, and there are large wetlands and spruce forests nearby. The CD section of the trail was built on Beacon Hill, with dense vegetation on both sides, and it extends to the central area of the Qingshan Lake Scenic Area; it is the longest of the three sections.

POI Data and Travel Time Cost Data

POI data and travel time data were obtained from the Amap open platform and captured by Python tools. POI data were obtained through nearby searches, and travel time data were obtained through path planning. The walking speed set in the path planning was approximately 4.2 km/h, and the cycling speed was approximately 12 km/h. First, the four sites on the QLG-I Trail (points A, B, C, and D) were used as the centres of circles with a radius of 10 km to find the surrounding commercial, residential, and transportation facilities; to obtain their geographical coordinates, addresses, and other POI data; and to remove invalid points under construction. Second, we used the path-planning service, taking the geographical coordinates of the POI data obtained in the first step as the sample points (starting points) and the geographical coordinates of the four sites of the QLG-I Trail as the target points (end points). Third, through path planning for walking and cycling, we obtained the time spent on each path from every sample point to the target points, giving each sample point a time cost attribute.

Questionnaire Design and Site Investigation

The survey time was from June to July 2020, and it had been 3 years since the opening of the QLG-I Trail. The survey was divided into two stages.
The first stage mainly investigated the use of the trail. During the period from 7:00 a.m. to 9:00 p.m. on 13 June, we observed and recorded users' behaviour, habits, and activity types without interference. Then, a questionnaire was designed based on the results of the observation records to evaluate the user attributes, the basic use of the greenway, and the IPA evaluation scale for the trail. The IPA evaluation scale was based on The Construction Technical Guidelines of Hangzhou Greenway System (Pilot edition) and QLG planning and design data, and it included five types of evaluation factors (road quality, supporting facilities, the natural environment, regional cultural characteristics, and management and maintenance [7,34,48-50]) for a total of 25 index factors (Table A1). The second stage was the distribution and collection of questionnaires. Researchers randomly distributed 20 questionnaires on the QLG-I Trail in the morning and evening of 16-17 June for the pre-survey and asked the interviewees whether the questionnaire content was reasonable. The questionnaire data obtained at this stage were used only to revise the questionnaire, not as the main data analysed in the research. Then, we chose 5 days (sunny weekends or working days) from June to July to formally distribute the questionnaire on site and ensured that the time and place were the same as those of the pre-survey. A total of 243 copies were distributed, of which 201 valid questionnaires were obtained, for an effective response rate of approximately 82.7%.

Data Analysis

Kriging is a regression algorithm for the spatial modelling and prediction (or interpolation) of random processes or random fields based on the covariance function [51]. For certain random processes, the kriging method gives the best linear unbiased prediction (BLUP); thus, it is often used to estimate point data distributed over a surface [52].
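The kriging step can be illustrated with a minimal pure-Python ordinary-kriging interpolator. This is a sketch under simplifying assumptions (a Gaussian covariance model with illustrative sill and range parameters, and planar coordinates), not the GIS tooling used in the study:

```python
# Minimal ordinary kriging: interpolates a value (e.g. a travel-time cost)
# at an unsampled location from scattered sample points. The Gaussian
# covariance model and its parameters are illustrative only.
from math import exp, dist

def cov(h, sill=1.0, rng=2.0):
    """Gaussian covariance: large for nearby points, near zero far apart."""
    return sill * exp(-(h / rng) ** 2)

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def krige(samples, target):
    """Ordinary-kriging (BLUP) estimate at `target`.
    samples: list of ((x, y), value); a Lagrange row forces the weights
    to sum to 1, which makes the estimator unbiased."""
    pts = [p for p, _ in samples]
    n = len(pts)
    A = [[cov(dist(pi, pj)) for pj in pts] + [1.0] for pi in pts]
    A.append([1.0] * n + [0.0])
    b = [cov(dist(p, target)) for p in pts] + [1.0]
    w = solve(A, b)[:n]
    return sum(wi * v for wi, (_, v) in zip(w, samples))
```

With a zero nugget, the estimator is exact at the sample points themselves, which is a convenient sanity check for any kriging implementation.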
In this study, the obtained sample points with time cost attributes and the target point data were used to create a buffer with a radius of 10 km around each target point as the estimated reach of 15 min of cycling. Then, we used kriging interpolation to convert the time cost attribute values of the sample points into raster attribute values and drew raster maps of walking and cycling in different time periods for the four target points within the circular buffer, which were used to analyse how the QLG-I Trail serves to promote healthy travel. After the survey, the questionnaire data obtained in paper form were statistically analysed. Descriptive statistics were used to analyse the demographic characteristics and the basic use of the trail. Based on the IPA evaluation scale data on the QLG-I Trail, we conducted an overall reliability test of the 201 questionnaires using the internal consistency coefficient (Cronbach's alpha [53]). The α value of those questionnaires was 0.922, i.e., greater than 0.7, indicating that the questionnaire reliability test score was high. The questionnaire design was reasonable and had good reliability.

The Walking and Cycling Accessibility of the QLG-I Trail

To analyse the difference in the accessibility of the different target points, we drew isochronous grid maps of walking and cycling at the four target points (Figures 4 and 5), dividing the time cost data into five periods: 0-15, 15-30, 30-45, 45-60, and greater than 60 min. Figures 4 and 5 reflect the distribution of the actual time spent travelling from the four target points to each sample point. The isochronous shape of each target point is not theoretically circular, and the spacing is unevenly distributed.
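The five-period binning behind the isochrone maps can be sketched as follows; treating each upper bound as inclusive is an assumption, since the paper does not state how boundary values were assigned:

```python
# Assign each sample point's time cost to one of the five isochrone
# periods used for the walking and cycling grid maps.
def time_period(minutes):
    """Map a travel time (min) to its isochrone band; upper bounds inclusive."""
    for label, upper in (("0-15", 15), ("15-30", 30),
                         ("30-45", 45), ("45-60", 60)):
        if minutes <= upper:
            return label
    return ">60"
```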
Based on the distribution of the sample points and the curvature of the isochrones, the distribution of the surrounding terrain and road network can also be inferred. Figure 4 shows that the area of the circle from point A to point D within 1 h of walking gradually decreases, while the accessibility gradually weakens. Clearly, the area reachable within 15 min of walking accounts for a relatively small proportion of the circular buffer zone. Figure 5 shows that the area of the circle from point A to point D within 1 h of cycling gradually decreases. The reachable range within 1 h of cycling at the three points A, B, and C basically covers the circular buffer zone. The area that can be reached by cycling within half an hour is also wider. Taken together, the results show that the accessible range of cycling within 1 h is significantly greater than that of walking. The circle for 45 min of walking is similar in shape and size to that for 15 min of cycling, indicating that the accessibility of walking for 45 min and that of cycling for 15 min are relatively close.
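The near-equivalence of the 45 min walking circle and the 15 min cycling circle follows directly from the speeds set in the path planning:

```python
# Distances covered at the path-planning speeds (4.2 and 12 km/h):
walk_45min_km = 4.2 * (45 / 60)    # about 3.15 km on foot
cycle_15min_km = 12.0 * (15 / 60)  # about 3.0 km by bicycle
```

Both circles therefore correspond to roughly 3 km of travel, which is why their shapes and sizes are similar.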
The Promoting Healthy Travel Range of the QLG-I Trail

This study first extracted the isochrones from the sample points to the four target points and calculated the isochronous area for walking and cycling travel times within 15 min (excluding water areas, such as Qingshan Lake) as the actual service scope (S) of the four target points for promoting healthy travel. Second, we separately counted the total number of sample points within the 15 min circle of each of the four target points as the number of health-promotion-service units (n). Finally, the sample points in the buffers of the four target points were pooled, and 2116 valid sample points (Table 1) were obtained after removing duplicate points. At the same time, we superimposed the reachable ranges of walking and cycling for each target point as the scope of healthy travel promotion of the QLG-I Trail (Figure 6) and calculated the total service area and the total number of sample points. Compared with the other target points, point A has the best accessibility and the widest coverage of the healthy travel promotion service, whether by walking or cycling (Table 1). In terms of walking, the accessibility of point C is the worst, and the scope of its healthy travel promotion service is small.
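The tally of service units can be sketched as follows; the variable names and input layout are illustrative, not the study's data structures:

```python
# For each entrance, count the sample points reachable within 15 min,
# then pool the four entrances, removing duplicates, to get the
# trail-wide number of health-promotion-service units (n).
def service_units(times_by_target, limit=15.0):
    """times_by_target: {target: {point_id: minutes}}.
    Returns (per-target counts, total distinct points within `limit`)."""
    per_target = {t: sum(m <= limit for m in pts.values())
                  for t, pts in times_by_target.items()}
    pooled = {pid for pts in times_by_target.values()
              for pid, m in pts.items() if m <= limit}
    return per_target, len(pooled)
```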
However, the results for cycling show that point D is the worst. The largest difference in service area between the two modes of travel is at point B, where walking accessibility is lower, while cycling accessibility is higher. point as the scope for promoting health travel of the QLG-I Trail ( Figure 6) and calculated the total service area and the total number of sample points. Compared with other target points, point A has the best accessibility and widest coverage of promoting healthy travel service, whether by walking or cycling (Table 1). In terms of walking, the accessibility of point C is the worst, and the scope of the promoting healthy travel service is small. However, the results for cycling show that point D is the worst. The largest difference in service area between the two modes of travel is at point B, where walking accessibility is lower, while cycling accessibility is higher. According to the results, the walking area of the QLG-I Trail within 15 min is approximately 4.35 km 2 , and the cycling area is approximately 29.45 km 2 . In general, the accessibility of the four entrances of the QLG-I Trail within 15 min is as follows: A > B > C > D. The walking area of the QLG-I Trail within 15 min is mainly the nearby residential areas and traffic stations along the line. The cycling area is mainly concentrated in some central urban areas and towns on the west side of Qingshan Lake. Table 2 shows that, in terms of the proportion of respondents, there were slightly more women (51.2%) than men (48.8%); however, the difference was small. In terms of age groups, there were people of all ages. The respondents were mainly young and middle-aged people aged 18-55 (85.6%), and 13.4% of the respondents were over 55 years old. Users were primarily local residents from residential areas around the greenway and downtown (88.6%). There were a smaller number of people from areas outside of Hangzhou city (4%). 
User Attributes and Use Characteristics

Table 2 shows that, in terms of the proportion of respondents, there were slightly more women (51.2%) than men (48.8%); however, the difference was small. In terms of age groups, there were people of all ages. The respondents were mainly young and middle-aged people aged 18-55 (85.6%), and 13.4% of the respondents were over 55 years old. Users were primarily local residents from residential areas around the greenway and downtown (88.6%). A smaller number of people came from areas outside of Hangzhou city (4%).

Regarding the frequency of use, 37.8% of respondents used the trail multiple times a week, and 24.9% used it daily. The respondents' main mode of transportation was walking (50.2%), followed by cycling (27.9%) and driving (17.4%). Due to the imperfect public transportation facilities near the QLG-I Trail, fewer people used public transportation (4.5%). Daily visits were mostly concentrated in the hours of 7:00 a.m.-9:00 a.m. and 5:00 p.m.-8:00 p.m., accounting for 33.3% and 46.3%, respectively.

Health Awareness and Health-Promoting Behaviour of Users

The QLG-I Trail users' health status survey was based on the respondents' opinion of whether maintaining their health required physical exercise, divided into five levels: exercise being very necessary, necessary, comparatively necessary, normal, and not needed (with cognitive intensity decreasing in that order). The results show (Table 3) that 33.83% of the respondents think that they urgently need more physical exercise, about half (47.76%) think that they need physical exercise, and only a small number (1.49%) think that they do not need it. Most of the interviewees had relatively high self-health awareness, which also reflects their pursuit of a healthier lifestyle.
Through on-site observation and the distribution of questionnaires, this paper counted the types and frequency of the health-promoting behaviours of the respondents (Table 4). Behaviours such as physical exercise, social interaction, entertainment and leisure, and natural experience, which promote people's physical or mental health, are termed health-promoting behaviours. Overall, physical exercise (40.8%) and natural experience (28.1%) occurred frequently, while social interaction (8.1%) and entertainment and leisure (15.3%) activities occurred less frequently. According to the analysis of health-promoting behaviours (Table 5), at the 95% confidence level there were significant differences in health-promoting behaviours among users from different places of residence. People who lived within a close distance tended to exercise, while those who lived far away tended to choose a natural experience. However, the p values for gender and age group were all greater than 0.05, indicating no significant difference in the probability of the occurrence of health-promoting behaviours between those groups.

The average importance scores of the 25 index factors (Table 6) indicate that the respondents had a high perception of the importance of these functions (between "average" and "very important"). Moreover, their standard deviations were all less than 1.2, indicating that the respondents' perceptions of and attitudes towards the importance of these functions deviated relatively little. The index factors with the highest average scores were S4, S21, S25, S2, and S5. The highest average score among the evaluation factors was for management and maintenance (4.47), and the second highest was for road quality (4.43). These scores are similar; the interviewees thus believed that the factors affecting the health-promoting behaviour of QLG-I Trail users are mainly concentrated in these aspects. In addition, the natural environment of the QLG-I Trail obtained a high score (4.37).
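The group-difference tests behind the p values reported for Table 5 are not named in the text; for counts of behaviour types across user groups, a Pearson chi-square test of independence is one common choice. The sketch below uses hypothetical counts, not the survey data, and compares the statistic with the tabulated critical value 7.815 for df = 3 at the 0.05 level.

```python
def chi_square_stat(table):
    """Pearson chi-square statistic and degrees of freedom for a
    contingency table (rows = user groups, columns = behaviour types)."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    total = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / total  # expected count under independence
            stat += (obs - exp) ** 2 / exp
    df = (len(table) - 1) * (len(table[0]) - 1)
    return stat, df

# Hypothetical counts: two residence groups x four behaviour types
# (physical exercise, social interaction, entertainment and leisure, natural experience)
table = [[60, 10, 20, 30],   # residents living nearby
         [25, 15, 25, 55]]   # users living farther away
stat, df = chi_square_stat(table)
# df = 3; reject independence at the 0.05 level if stat > 7.815
significant = stat > 7.815
```

With these illustrative counts the statistic is about 23.3, far above the critical value, mirroring the significant residence effect reported above.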
In comparison, the S17, S18, and S19 index factors of the regional cultural characteristics were considered the least important influencing factors, as their comprehensive average score was low (3.63).

Satisfaction Evaluation of the QLG-I Trail

The average satisfaction scores of the 25 index factors were between 2.99 and 4.39 (Table 6). Compared with the perceived importance of the factors affecting health promotion, the overall satisfaction scores were lower, indicating that most interviewees believed that the actual performance of the QLG-I Trail did not meet their expectations. The index factors of the natural environment had average scores of 4.2 points or more, and their standard deviations were relatively low, indicating that users were satisfied with the natural environment around Qingshan Lake. Conversely, the three lowest-ranking index factors were S7, S9, and S10. The high occupation of parking lots, inconvenient bicycle rental facilities, and few toilets and benches offered users a poor recreational experience, resulting in the lowest satisfaction score for the supporting facilities indicators (3.46).

IPA Chart

In this study, we drew an IPA scatter diagram (Figure 7) to analyse the importance of and satisfaction with the greenway factors of the QLG-I Trail for users' health-promoting behaviour.

Quadrant 1 is the Keep Up the Good Work area, and it includes the remaining ten index factors (Figure 7). Their average scores are higher than both the overall average importance score and the overall average satisfaction score. The evaluation results of this quadrant show that the natural environment, road quality, and management and maintenance of the QLG-I Trail were more prominent overall, and user satisfaction was relatively high. Notably, however, the average satisfaction value of the index factors in this quadrant was lower than the average importance value. Therefore, while the QLG-I Trail continues to maintain its advantages, managers should tap its development potential and continuously improve the corresponding functions.

Quadrant 2 is the Possible Overkill area, and it includes six index factors. As shown in Figure 7, their average scores were lower than the overall average importance score but higher than the overall average satisfaction score. The evaluation results of this quadrant showed that users generally have a high degree of recognition of these index factors; however, their impact on users' health-promoting behaviours is not considered very important, especially in the case of the regional cultural characteristics of the QLG-I Trail. In the opinion of the interviewees, these characteristics focus on showing the cultural features of the Lin'an District, which increases residents' spiritual identity; however, their promotion of healthy behaviour is not apparent.

Quadrant 3 is the Low Priority area, and it includes three index factors. Their average scores were lower than both the overall average importance score and the overall average satisfaction score. The results of this quadrant show that parking lots, commercial service facilities, and public health were not important indicators for promoting the health service capacity of the QLG-I Trail and can be appropriately controlled. However, this does not mean that the functions of these index factors should be neglected. Their average importance scores were above 3.5, and there is still much room for improvement.
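The assignment of factors to the four IPA quadrants follows directly from comparing each factor's mean importance and mean satisfaction with the grand means. A minimal sketch with hypothetical (importance, satisfaction) pairs follows; the quadrant labels use the conventional IPA terminology, and the scores are illustrative, not the survey's values.

```python
def ipa_quadrants(scores):
    """Assign each index factor to an IPA quadrant by comparing its mean
    importance (I) and mean satisfaction (S) with the grand means."""
    mean_i = sum(i for i, _ in scores.values()) / len(scores)
    mean_s = sum(s for _, s in scores.values()) / len(scores)
    names = {(True, True): "Keep Up the Good Work",   # Quadrant 1
             (False, True): "Possible Overkill",      # Quadrant 2
             (False, False): "Low Priority",          # Quadrant 3
             (True, False): "Concentrate Here"}       # Quadrant 4
    return {f: names[(i >= mean_i, s >= mean_s)]
            for f, (i, s) in scores.items()}

# Hypothetical (importance, satisfaction) means for four factors
scores = {"S4": (4.5, 4.3), "S7": (3.6, 3.0),
          "S17": (3.6, 4.1), "S22": (4.4, 3.2)}
quadrant = ipa_quadrants(scores)
# S4 -> Keep Up the Good Work, S7 -> Low Priority,
# S17 -> Possible Overkill, S22 -> Concentrate Here
```

The same thresholding against the grand means produces the scatter-plot quadrant boundaries in Figure 7.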
Moreover, satisfaction with the index factors in Quadrant 3 was low; if they are ignored, they may lead to poor overall performance.

Quadrant 4 is the Concentrate Here area, and it includes six index factors. Their average scores were all higher than the overall average importance score and lower than the overall average satisfaction score. The evaluation results of this quadrant show that users believed that the safety protection facilities, humanized equipment, and cleanliness of the environment of the QLG-I Trail were important for their health-promoting behaviour; however, they were not very satisfied with the current service quality. Therefore, managers should pay attention to the service quality of the indicator factors in this quadrant and provide a more comfortable and safer greenway environment to promote healthy behaviour.

Research Contribution

This research focuses on the health-promotion-service capabilities of greenways. It evaluates the health promotion services provided by the QLG-I Trail based on accessibility and on user perceptions and evaluations, and it makes three contributions to the research on country greenways. First, this study assessed the scope of the health promotion services of greenway trails based on the accessibility of walking and cycling. The path-planning function provided by open digital map platforms reduces the ambiguity of the scope obtained by using buffer-zone analysis in previous studies [54,55], which is helpful in determining the service scope of greenways and even of other green infrastructure. Second, this research studied the health promotion services of country greenways from the perspective of user perceptions. Previous studies on country greenways have focused more on ecological services or social benefits [56][57][58], ignoring the role of greenways in promoting health.
Finally, by investigating user perceptions of the attributes of the QLG-I Trail, this study contributes by exploring which factors affect the occurrence of health-promoting behaviours on a country greenway.

Promotion of Healthy Travel

The research results show that there were clear differences in the accessibility of the main entrances and exits of the QLG-I Trail. The same observation was made in Coutts' research: by segmenting multiple greenways and calculating the population density, land-use mix, and number of greenway users in each area, he concluded that the accessibility of each part of a greenway differs and is affected by a variety of factors [4]. This difference is related to the linear character of a greenway, which is fundamentally different from a large area of parkland [59,60]. It can easily cause users to concentrate on a certain section while other sections are less used, which further leads to the idleness or waste of greenway resources, whether natural resources or infrastructure, in certain sections. Another study by Coutts also showed that greenway sections that intersect parks and sections where commercial land is concentrated had the largest usage, and the accessibility differences there were more obvious [61]. Therefore, a greenway should not be regarded as a whole in accessibility analysis; instead, the reasons for the differences in each section should be analysed so that targeted optimization measures can be taken. In this case study, point A is close to the main urban area and is connected to the main regional transportation routes, with the best walking and cycling accessibility. Points C and D are far from the main transportation routes, and the surrounding terrain is steep. There are several large residential communities in the area, and the population density is not low; however, the communities are closed and cannot be traversed.
The transportation cost of reaching the greenway there is therefore relatively high, and the accessibility is poor. In general, residents reach greenways located in urban areas mainly by walking, whereas country greenways take longer to reach and are more easily reached by bicycle or motor vehicle [62]. Based on this difference, greenway builders should allocate infrastructure differently to avoid resource shortages or waste. Furthermore, greenway builders should consider how entrances and exits with poor accessibility can be better integrated into the regional transportation system to guide an even distribution of users along the greenway. In addition, in this case, we noticed that the poorly accessible entrances and exits of Qingshan Lake, a country greenway connecting urban and rural areas, are mainly distributed in rural areas far from the urban end, which also reveals that the convenience of transportation systems differs between urban and rural areas. A review study by James showed that the accessibility of green spaces is frequently related to socio-economic status: groups with lower socio-economic status obtain less green space but may benefit more from its use. Therefore, the rational allocation of green space elements and the enhancement of the accessibility and fairness of marginal green spaces can further alleviate socio-economic differences in health [63]. Our strategy is based on this view. In addition, the reachability analysis showed that the QLG-I Trail's pedestrian and bicycle areas within 15 min cover only the surrounding communities and towns, and the trail's service scope is limited. The user survey results also showed that the audience of the trail is mainly residents of nearby communities, and approximately 78% of users visit on foot or by bike. This result is relatively consistent with the preference research results of Dorwart et al. and Keith et al.
on the use of greenway trails, with trail visitors being mainly nearby residents [62,64]. Therefore, greenway managers can consider constructing a network of non-motorized transportation systems that extends to every residential community, improving the connectivity and matching between the country greenway space and the urban non-motorized commuter space to accommodate and encourage users to travel in these healthy ways [65,66]. At the same time, greenway managers should consider benefiting a wider range of people. Given the visiting methods and the fact that few people use public transportation, sufficient parking for bicycles and motor vehicles can be built near the main entrances and exits, and the connection between a greenway's main entrances and exits and public transportation can be strengthened to provide more convenient travel services for users who are far away [21,49].

Health-Promoting Behaviour

In the user questionnaire survey, 91% of visiting users clearly stated that they need to exercise, which shows that the health awareness and fitness willingness of visiting users are very strong; the results of the health-promoting behaviour survey also showed that physical exercise was their main activity at Qingshan Lake. Therefore, the ability of the QLG to provide health-promoting-behaviour services should be the top priority for builders and managers, especially the construction of facilities for physical exercise. The survey results regarding the users' place of residence showed that users who live closer to the trail preferred physical exercise, while users who lived farther away were more inclined towards natural experience activities, indicating that the purpose of visiting and distance are related. This result is an unexpected discovery.
Results relating the purpose of the visit to distance had not previously appeared in the greenway-related literature [67][68][69], and the specific reasons warrant in-depth follow-up research. Taking into account the preferences of users at different distances, greenway planners can set up locker rooms, bag storage places, and other supporting facilities in sections close to surrounding residential areas, and set up open spaces, such as paved squares, or provide better pavement for physical exercise, which is also conducive to group fitness activities, such as the group square dancing that middle-aged and elderly women in China love to do [70]. In sections away from surrounding residential areas, high-quality scenic sections can be set up for people to experience nature, with features such as meditation seats and nature science signs [71]. In addition, the survey results on users' visiting times showed that the trail had a large traffic flow between 5:00 p.m. and 8:00 p.m., which is consistent with Xiong et al.'s research results [42], showing that the summer evening is the peak period for greenway trail users. Many surveys and studies have shown that safety (in various forms) should be the top priority of greenway managers and planners [34], and a considerable amount of research has also focused on safety issues in green space management [72]. Therefore, taking into account users' demand for greenway trail health promotion services at night, greenway managers can adopt smart city design methods to integrate landscape lighting systems, environmental music systems, security systems, and security patrol systems, supplemented by a complete sports identification system and multimedia publishing system, which can play a role in spatial guidance, preventing sports injuries, and even reducing criminal acts [34].
Trail Attributes Affecting the Health Promotion of Country Greenways

With regard to the factors that affect greenways' intervention in promoting healthy behaviours, the IPA scale survey results show that the natural environment is a factor that users consider to be of high importance and with which they are highly satisfied, indicating that a complete and continuous natural environment in a linear space is an important factor for attracting users to outdoor sports and releasing psychological stress. Research has also shown that continuous green space has a positive effect on reducing stress and improving health [73]. Therefore, we should continue to maintain the original rich natural environmental resources of greenway trails. Furthermore, we should avoid damage to the linear landscape caused by certain infrastructure, paying particular attention to transportation facilities cutting the continuity of greenways. Three-dimensional overpasses can be used to eliminate the damage caused by park entrances and exits and road intersections to the linear spatial continuity of greenways and to construct a closed loop of scenery without breakpoints throughout the route. Ahn et al. also confirmed the important role of such measures in reducing air pollution [74]. In addition, surveys of user visits have shown that factors such as protective barriers, cleanliness, fire safety, trail comfort, and barrier-free access are important factors that trail users believe promote healthy behaviour, while historical allusions, traditional construction techniques, and the application of local materials have not had much impact on promoting healthy behaviour. At the same time, the survey results also showed that users are not very satisfied with the QLG-I Trail's parking lot, lighting system, and cleanliness.
In particular, the six indicators in quadrant 4, the Concentrate Here area (protective fences, wheelchair accessibility, the lighting system, cleanliness, security guards on patrol, and fire safety), mainly involve two aspects: safety protection and infrastructure for caring for vulnerable groups. An IPA evaluation of the Atlanta Loop Greenway in Georgia, USA also found that residents were dissatisfied with various forms of safety elements on greenways [34], and an evaluation of the use patterns of and design preferences for sports activities of elderly people on the Berlin Creek Greenway also reflected care for vulnerable groups [64]. Therefore, in addition to adopting the complete service facilities mentioned earlier, attention should be paid to night lighting and to ensuring the continuity of routes. Greenway managers should also pay attention to the special needs of vulnerable groups and to humanized construction details to improve the safety, fairness, and comfort of greenway use (for example, ensuring that the flatness and elasticity of road surface materials meet the needs of the physical characteristics of children and elderly people) [75]. For the vegetation on both sides of a trail, tree species that do not cause respiratory diseases should be chosen. In addition, facilities for disabled people who can perform physical activities alone can be added [76], and it is important to ensure their safe use.

Public Participation

In this study, through the reachability analysis of the QLG-I Trail and user surveys and interviews, greenway users' preferences and health-promoting behaviours were investigated. The results of the study reflect that some sections of the QLG-I Trail lack continuity, that management and maintenance are not in place, and that there are additional issues.
This bottom-up survey and research method can obtain real feedback from greenway users, and the research results and discussion can help greenway managers optimize greenways in a targeted manner. The research by Lee et al. showed that public participation can strengthen people's sense of responsibility to the local area and make it easier to meet users' psychological expectations and gain users' recognition [50]. Therefore, the mechanism of public participation is very effective and deserves to be applied and promoted in the subsequent greenway operation and renewal process. In research related to greenway planning and management, a considerable amount of work explores how to apply this mechanism effectively. By investigating the post-use evaluation of greenway users, Chi et al. analysed whether the top-down strategy matched the daily lives of residents [77]. Lim et al. encouraged public participation by organizing activities and publishing development strategies [78]; for example, through "voluntary art activities" in regenerated cultural and artistic spaces, the public is called on to participate at the individual, community, and urban levels to promote the sustainability of urban communities. A dynamic abiotic, biological, and cultural (DABC) system framework has been proposed to prioritize policy attributes through public participation and adaptation to changes in the planning environment and scenarios [79]. There is also the "AngelGREEN" green network development strategy for the 13th District of Budapest, which is not only a framework for prioritizing management responsibilities but is also intended to help gradually increase public participation in green network planning and maintenance [80].
Drawing on the research results above, we believe that greenway managers can set up a project management committee composed of user representatives, government management departments, fitness and sports volunteer organizations, sports brand merchants, and professional operation teams. In this way, government management departments can better respond to the needs of users, and participants from all parties can take part in the organization, publicity, and supervision of activities to make the health-promotion-service provision of greenway trails more extensive and efficient.

Research Limitations and Future Directions

As with previous studies, this research still needs to be improved in the following ways. First, this paper took a country greenway as its example; there is no comparative study of the health promotion capabilities of urban greenways and trails, which is a limitation. Second, young people account for only a small proportion of the respondents; however, in reality, such people often travel with their families and are also important service objects of greenways [81,82]. Therefore, research should strengthen the promotion of healthy behaviours among young people. Third, the main factors considered in this study that affect the promotion of healthy behaviours are not yet comprehensive. Social factors, such as users' living habits, working environment, and educational level, as well as climatic factors, such as seasonal changes, may also have an impact. In subsequent research, the visual-aesthetic characteristics provided by the greenway should also be valued in terms of health promotion, such as the impact of dynamic changes in viewing angles or viewpoints on users' psychological feelings.
Finally, we will gradually address the above shortcomings and continue to conduct and improve post-use survey and evaluation research on the entire QLG in terms of greenway intervention and health promotion, with the aim of providing further practical value for the landscape planning and design of similar greenways.

… Retail stores, bicycle rental, greenway relays
S10 Public health: Toilets, dustbins, hand sinks
Natural environment:
S11 Terrain diversity: Wetlands, hills, fields, lakes and mountains
S12 Colourful plant landscape: Metasequoia forest, wet plants, farmland, flower bushes
S13 Scenery line: Ridgeline, wood line, lakeshore line
S14 Meteorological landscape: Landscapes that change over time or seasons, like sunrise, rain, or dusk
S15 Biological landscape: Birds, fish, squirrels and so on
Regional cultural characteristics:
S16 Greenway theme: Reflects green ecology, health or sports
S17 Historical allusions: Wuyue culture
S18 Traditional construction techniques: Seal cutting, carving, masonry technology
S19 Application of local materials: Bricks, stones, tiles and bamboo
S20 Sense of belonging: Willing to be close, stay and enjoy
Management and maintenance:
S21 Cleanliness: Normal environmental cleanliness
S22 Security guards patrol: Security personnel, guardhouses, alarms
S23 Facilities fully equipped: All facilities can be used normally
S24 Plant conservation: Whether plants have pests and diseases and their growth status
S25 Fire safety: Fire-fighting facilities, emergency escape routes
Invasive species in the grasslands of the Central Caucasus

Biological invasions and grassland transformation are significant problems in pasture ecosystems of the Central Caucasus. The aim was to study the main patterns of invasive processes in grasslands, including identification of the main vegetation parameters and abiotic factors affecting the invasion and distribution of alien species (Erigeron annuus, Ambrosia artemisiifolia, and Xanthium albinum) in plant communities. We assessed vegetation parameters of steppe grasslands with the presence of alien species within 112 model plots on plains, in foothills, and in low mountains (250-1000 m above sea level). We also modeled the current habitats of the species in grasslands of the Central Caucasus by using the Maxent method. The most suitable for the invasion and distribution of Erigeron annuus are productive grasslands (NDVI of 0.25 and more) of the foothills and low mountains with a moderately warm, humid climate (average annual temperature of 5-10°C; precipitation of the most humid quarter of 240-300 mm). The most suitable for Ambrosia artemisiifolia are medium-productive grasslands (NDVI of 0.25-0.38) of the foothills and low mountains with low vegetation coverage (65-85%) and a moderately humid climate (precipitation of the most humid quarter of 225-275 mm). The most suitable for Xanthium albinum at present are dry, unproductive, disturbed grasslands of the plains.

Introduction

Biological invasions are one of the most significant environmental problems leading to the degradation of grasslands globally [1][2][3][4]. Whereas grasslands were formerly relatively resistant to the invasion of alien species, nowadays climate warming, overgrazing, and the development of the road network have significantly increased the risk of biological invasions in these environments. The problem is also relevant for the Central Caucasus, where alien plant species are intensively penetrating the lowland and mountain grasslands.
Given the importance of the agricultural sector in the region, biological invasions here can have severe socioeconomic consequences. The alien plant species that are widespread in the lowland and foothill regions of the Central Caucasus and are currently penetrating the grasslands include Erigeron annuus (L.) Pers., Ambrosia artemisiifolia L., and Xanthium albinum (Widd.) Scholz & Sukopp. The distribution of these species, like overgrazing, is a key factor in grassland degradation, changing the forage value and the biodiversity of pastures. The native range of Erigeron annuus includes the eastern United States, where the species grows in weedy places, along roadsides and river banks, and in meadows and steppes [5]. The first reliable information on the growth of Erigeron annuus in the Caucasus dates back to the 1930s (weedy places on the coast of Abkhazia and Georgia [6]). The native range of Ambrosia artemisiifolia covers the eastern and southeastern United States and southern Canada. The species grows along roadsides, in weedy places, along river banks, and on agricultural land [7]. In the Caucasus, Ambrosia artemisiifolia was first noted back in 1914 in Krasnodar krai [8], from where it spread widely throughout the region. Xanthium albinum is also of North American origin. It is widely distributed on wetlands, agricultural land, and roadsides. The time of appearance of Xanthium albinum in the Caucasus is not known for certain. Data on the patterns of expansion of these alien species into grasslands are limited. According to our observations, the invasion of Erigeron annuus, Ambrosia artemisiifolia, and Xanthium albinum into grasslands is associated with the transfer of seeds by transport traffic and with the disturbance of pastures due to overgrazing. The purpose of this study was to identify the main patterns of invasive processes in grasslands of the Central Caucasus.
The objectives were a) to study the main parameters of vegetation affecting the distribution of alien species in plant communities; b) to identify the main abiotic factors that limit (or stimulate) the distribution of alien species in the study area. Study area We conducted the studies in the Central Caucasus (between 42°54'-44°01' N and 43°52'-43°03' E) within the elbrusskiy and terskiy variants of vertical zonation of the northern macroslope of the Central Caucasus in 2017-2021. The lack of a broad-leaved forest belt and the pronounced xerophytization of landscapes determine the peculiarities of the elbrusskiy variant of vertical zonation [9]. Its belt spectrum consists of meadow steppes (forest-steppes), steppe meadows, and subalpine, alpine, subnival, and nival belts. Mesophytization of landscapes is peculiar to the terskiy variant of vertical zonation; a broad-leaved forest belt and subalpine, alpine, subnival, and nival belts are represented in its composition. The mountainous relief, altitude above sea level, and the arrival of western air masses from the Atlantic form a relatively cold and humid continental climate in the mountain regions of the study area. The climate of the lowland regions is continental, relatively hot, and dry. Data collection and measurements We established 112 model plots in steppe grasslands with the presence of alien species within the plains, foothills, and low mountain regions of the Central Caucasus from 250 m to 1000 m above sea level (42 model plots for Erigeron annuus, 40 model plots for Ambrosia artemisiifolia, and 30 model plots for Xanthium albinum). The model plots were located on gentle slopes (5-20°), on river terraces, and on plain areas near or far from villages. The area of each plot was 900 m². No alien plants were found in steppe midmountain pastures or in subalpine and alpine high mountain grasslands of the study area. 
We visually assessed vegetation coverage and the coverage of each species on the model plots, expressed as percentages (Table 1). Grass height (cm) was recorded as the average height of cereal leaves. Species richness represented the total number of species within each plot. We applied the Shannon and Berger-Parker indices to quantify the alpha diversity (evenness) and degree of dominance of plant communities. Plant species nomenclature follows TPL [10]. We used multiple regression analysis in Statistica 10 to identify the main vegetation and environmental parameters of the model plots that affect the coverage of alien species in grasslands. Shannon and Berger-Parker indices were calculated using Past 4.0. To identify the abiotic factors limiting the distribution of Erigeron annuus, Ambrosia artemisiifolia, and Xanthium albinum in grasslands, we modeled the current habitats of the species in the Central Caucasus using the Maxent method (Maxent software, v 3.4.1; linear/quadratic feature types; 0.5 regularization multiplier). The Maxent method selects habitats similar to those in which the species was found based on the distributions of environmental properties [11,12]. The habitats identified with the highest probability of detecting the species are the most suitable. The analysis included the GPS coordinates of 112 habitats of the species in grasslands (42 presence points for Erigeron annuus, 40 presence points for Ambrosia artemisiifolia, and 30 presence points for Xanthium albinum (Appendix)), identified during field research in 2017-2021. To obtain an adequate model, the calculation was carried out in five replicates with a uniform distribution of test and training samples. WorldClim climate models, which include 19 bioclimatic variables [13], were used as a basis for the interpolation of data on the spatial distribution of the species. 
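The two diversity measures named above are simple to compute from per-species cover values; a minimal sketch (the cover data here are hypothetical illustration, not values from the study):

```python
import math

def shannon_index(abundances):
    """Shannon diversity H' = -sum(p_i * ln(p_i)) over species proportions."""
    total = sum(abundances)
    props = [a / total for a in abundances if a > 0]
    return -sum(p * math.log(p) for p in props)

def berger_parker_index(abundances):
    """Berger-Parker dominance d = N_max / N (share of the most abundant species)."""
    return max(abundances) / sum(abundances)

# Hypothetical per-species cover values (%) from one model plot
cover = [40, 25, 15, 10, 5, 5]
print(round(shannon_index(cover), 3))
print(round(berger_parker_index(cover), 3))
```

A community increasingly dominated by one invader drives the Shannon index down and the Berger-Parker index up, which is the pattern the regression models below report for Erigeron annuus plots.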
We also used measurements of reflected solar radiation from the Landsat 8 OLI/TIRS satellite and normalized difference vegetation index (NDVI) values calculated from a mosaic of the study area. Based on SRTM (Shuttle Radar Topography Mission) data, we calculated the altitude and morphometric characteristics of the relief: slope, exposure, various types of curvature, etc. [14,15]. Environmental layers were clipped to the study area at 30 m resolution. The research results were graphically represented as maps of the species distributions in PanoplyWin (v 5.9). The probability of habitat suitability was visualized using the ranked values of the standard Maxent palette, in a color gradation from blue (occurrence "0") to red (occurrence "1"). Grasslands with probability values from 0.5 to 0.8 were considered potentially suitable for the species, and those above 0.8 optimal. Vegetation and environmental parameters affecting the coverage of alien species in grasslands We used grass height, vegetation coverage, species richness, the Shannon and Berger-Parker indices, altitude above sea level (m), and steepness (°) in a multiple regression analysis to identify the main vegetation and environmental parameters of the model plots affecting the coverage of each studied species. We created three models: an Ambrosia artemisiifolia model (A-model), an Erigeron annuus model (E-model), and a Xanthium albinum model (X-model). The A-model explained approximately 74% of the variation at the P < 0.001 significance level (Table 2). Vegetation coverage was the most important variable according to its regression coefficient (b), followed by altitude above sea level with a non-zero regression coefficient. Grass height, steepness, species richness, and the Shannon and Berger-Parker indices were nonsignificant parameters (P > 0.05) in determining Ambrosia artemisiifolia coverage. 
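The NDVI values and Maxent suitability thresholds described above are straightforward to compute; a minimal sketch, assuming Landsat 8 surface-reflectance arrays for the red (band 4) and near-infrared (band 5) channels (the reflectance values below are hypothetical):

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red)

def suitability_class(prob):
    """Classify a Maxent habitat-suitability probability per the thresholds above."""
    if prob > 0.8:
        return "optimal"
    if prob >= 0.5:
        return "potentially suitable"
    return "unsuitable"

# Hypothetical reflectance values for a 2x2 pixel window
red_band = np.array([[0.10, 0.12], [0.08, 0.20]])
nir_band = np.array([[0.40, 0.30], [0.12, 0.25]])
print(np.round(ndvi(nir_band, red_band), 2))
print(suitability_class(0.85))
```

In practice the NDVI raster would be computed over the whole study-area mosaic and supplied to Maxent as one environmental layer alongside the WorldClim and SRTM-derived layers.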
A decrease in vegetation coverage, with a negative regression coefficient, indicated an increase in the species' distribution in grasslands of the Central Caucasus. Altitude above sea level displayed a positive relationship with the species' coverage. Evidently, Ambrosia artemisiifolia has a greater distribution in the foothills and low mountain regions (450-1000 m above sea level) than on the plains (250-400 m) of the study area. The E-model explained 70% of the variation at the P < 0.05 significance level (Table 3). Altitude and the Shannon and Berger-Parker indices were the only important variables affecting the coverage of Erigeron annuus in grasslands. An increase in altitude from the plains to the low mountain regions (250-950 m above sea level) indicated an increase in the species' distribution in grasslands. The species' expansion within the model plots was accompanied by a decrease in alpha diversity (Shannon index) and an increase in the degree of dominance (Berger-Parker index) of plant communities. The other vegetation and environmental parameters did not have any limiting or stimulating effect on the Erigeron annuus distribution in grasslands of the Central Caucasus (P > 0.05). The X-model explained approximately 82% of the variation at the P < 0.0001 significance level; the standard error of the estimate was quite low (Table 4). These characteristics implied that the model was significant. Vegetation coverage was the most important variable, followed by species richness, the Berger-Parker index, grass height, altitude, and the Shannon index, all with non-zero regression coefficients. Steepness was the only nonsignificant parameter (P > 0.05) in determining Xanthium albinum coverage. Decreases in vegetation coverage, grass height, species richness, and the Shannon index, with negative regression coefficients, indicated an increase in the species' distribution in grasslands. 
At the same time, a decrease in these parameters characterizes an increase in the level of grassland degradation. The Berger-Parker index, which increased with increasing coverage of grazing-resistant secondary pasture dominants (Cynodon dactylon (L.) Pers., Elytrigia repens (L.) Nevski, Artemisia annua L., etc.), displayed a positive relationship with the coverage of Xanthium albinum. The species has a greater distribution on the plains and in the foothills (250-400 m above sea level) than in the low mountain regions (450-800 m) of the Central Caucasus. Spatial distribution of the alien species in grasslands Modeling of the current habitat of Erigeron annuus in the grasslands of the Central Caucasus using the Maxent method confirmed the wide distribution of the species in this type of ecosystem from the foothills to the low mountain regions (Figure 1). This area contained the largest number of possible locations of the species in grasslands, including locations with a probability above 80%. In the midmountain and high mountain regions, suitable habitats for the species are most likely to be found at roadsides along the river valleys; less often, grasslands suitable for the invasion of Erigeron annuus can be found on mountain slopes. The predicted number of grasslands potentially suitable and optimal for the species was also low in the plain regions. The range of Ambrosia artemisiifolia in the grasslands of the study area mainly encompassed the foothill regions (Figure 1). With a high probability (80-100%), penetration of the species into the low mountain grasslands is also possible. On the plains of the study region, grasslands suitable for Ambrosia artemisiifolia were represented mainly by a relatively small area adjacent to the foothills. Penetration of the species into high-altitude grasslands is currently unlikely. The main distribution area of Xanthium albinum in the grasslands of the Central Caucasus covered a relatively large plain area, as well as the foothills and low mountains (Figure 1). 
In the midmountain regions, the species is most likely to be found in grasslands near settlements. The predicted number of potentially suitable grasslands is lowest in the high mountain regions. There were three main factors affecting the invasion of Erigeron annuus in the grasslands of the study area, of which the climatic factor BIO16 (precipitation of the most humid quarter) made the largest contribution to the construction of the model; NDVI (normalized difference vegetation index in summer) and BIO1 (average annual temperature) had a lesser effect (Table 5). The maximum values of the percentage contribution were noted for these parameters. Analysis of the dependence of Erigeron annuus on the most significant climatic factors showed that grassland suitability for the species' invasion decreased when precipitation of the most humid quarter was less than 240 mm or more than 300 mm, and the acceptable range of average annual temperature was from 5°C to 10°C (Figure 2). The NDVI values suitable for the growth of Erigeron annuus were quite high (at least 0.25). Thus, the spread of Erigeron annuus at present occurs mainly in productive grasslands of the foothill and low mountain regions with a moderately warm humid climate. Among the environmental parameters that determine the distribution of Ambrosia artemisiifolia in the grasslands of the Central Caucasus, there were only two climatic indicators: BIO16 (precipitation of the most humid quarter) and BIO13 (precipitation of the most humid month) (Table 5). (Table 5 abbreviations: AUC, area under the curve; PC, percentage contribution to the construction of the models (%); PI, permutation importance (%); BIO13 and BIO16, precipitation of the most humid month and most humid quarter (mm); BIO14 and BIO17, precipitation of the driest month and driest quarter (mm); BIO1, average annual temperature (°C); NDVI, normalized difference vegetation index in summer.) 
The main conditions for the invasion and distribution of the species were fairly narrow ranges of precipitation of the most humid quarter and of the most humid month: 225-275 mm and 85-100 mm, respectively (Figure 2). The NDVI values suitable for the invasion of Ambrosia artemisiifolia varied within 0.25-0.38; more productive plant communities are not suitable for invasion by the species. Thus, the spread of Ambrosia artemisiifolia at present occurs mainly in medium-productive grasslands of the foothill and some plain regions, with a climate slightly drier than that suitable for Erigeron annuus. According to the Maxent models, the main climatic parameters for the Xanthium albinum distribution in grasslands were two indicators: BIO14 and BIO17 (precipitation of the driest month and driest quarter) (Table 5). The range of precipitation of the driest month acceptable for invasion by the species was 27-90 mm, and precipitation of the driest quarter on average should not exceed 90 mm (Figure 2). The NDVI values suitable for invasion by the species varied within 0.15-0.46. Thus, the factors that limit the distribution of Xanthium albinum in the grasslands of the Central Caucasus are precipitation and grassland productivity. This species can spread mainly in dry, unproductive (disturbed) grasslands from the plains to the midmountain regions. Discussion The purpose of this study was to identify the main vegetation parameters and abiotic factors affecting the distribution of alien species in grasslands of the Central Caucasus. Our results supported previous reports that, although Ambrosia artemisiifolia, Erigeron annuus, and Xanthium albinum are most common in ruderal sites, these species can colonize meadow ecosystems [16][17][18][19]. Multiple regression analysis showed a significant increase (P < 0.05) in the coverage of Ambrosia artemisiifolia in grasslands with increasing altitude from the plains to the low mountain regions. 
The species has a greater distribution in medium-productive disturbed grasslands of the foothills and low mountain regions with relatively low vegetation coverage. These results are in line with previous studies, which observed that strong competition for habitat resources might inhibit Ambrosia artemisiifolia growth and distribution, and that the degree of habitat disturbance affects the abundance of the species [16,[20][21][22]. The authors recommended phytocenotic control of the Ambrosia artemisiifolia distribution by inducing the dominance of perennial native species [20][21][22]. The grasslands of the Central Caucasus most suitable for invasion by the species have moderate moisture (225-275 mm during the summer). There is controversy over which precipitation regime is more suitable for the species' growth and distribution. According to Essl et al. [23], in temperate European climates, Ambrosia artemisiifolia prefers dry soils. On the other hand, Mang et al. [24] reported that in central Europe, the suitability for the species increased with increasing precipitation (up to 557 mm from April to October), which is in line with our results. We revealed a significant effect of altitude, precipitation, and temperature on the distribution of Erigeron annuus in grasslands. The species can form up to 80% of vegetation coverage in productive grasslands of the foothill and low mountain regions (450-950 m above sea level) with a moderately warm humid climate. Trtikova et al. [25] also reported that from 400 m above sea level to the altitudinal limit of Erigeron annuus in Switzerland (1000 m), the plants survived and grew vigorously during the growing season. The authors observed that climate warming might promote the upward range expansion of the species by reducing winter mortality of seedlings and increasing reproductive productivity in mountain regions [25]. 
The limited distribution of Erigeron annuus in the plain regions can probably be explained by the more pronounced continental climate of the plains in the study area, where high average annual temperatures (over 10°C) are observed. The level of competition from native species, which is characterized by vegetation coverage, did not affect the species' coverage in the studied grasslands. The species spreads intensively within undisturbed grasslands, and its distribution is accompanied by a decrease in diversity and an increase in the degree of dominance of plant communities. Kudryavtseva et al. [26] noted that the seeds of Erigeron annuus inhibited the development of seedlings in a number of native grass species. Cai et al. [27] also demonstrated the high individual-based competitive ability of Erigeron annuus. Liu et al. [28] concluded that the invasion of Erigeron annuus is harmful to the species diversity of grass ecosystems in the Wuling Mountain region. The importance value of the species in vegetation coverage and the species richness of the model plots showed a significant negative relationship [28]. Similar results were obtained by Wang et al. [29] for plant communities of East China. However, according to Trtikova [30], the species was better able to tolerate competition at low altitudes, whereas this factor hinders the reproduction and distribution of Erigeron annuus at high altitudes, which corresponds to our results. Analysis of the Xanthium albinum range within the grasslands of the Central Caucasus showed that dry, unproductive, disturbed pastures of the plains and foothills (250-400 m above sea level) represent the main region of current distribution of the species. Nagornaya [31] also demonstrated that the degree of disturbance of grass ecosystems was one of the most important factors in the distribution of this species in Kursk. Conclusions Our study showed that Ambrosia artemisiifolia, Erigeron annuus, and Xanthium albinum can colonize the grassland ecosystems of the Central Caucasus. 
The range of Ambrosia artemisiifolia and Erigeron annuus in the grasslands of the study area mainly encompassed the foothill and lowland regions, while the most common distribution regions of Xanthium albinum are the plains and foothills. The main factors affecting the invasion and distribution of Erigeron annuus in grasslands of the Central Caucasus are altitude, precipitation of the most humid quarter, the normalized difference vegetation index in summer, and average annual temperature. At present, the habitats most suitable for the species are mainly productive grasslands of the foothills and low mountains with a moderately warm humid climate. This competitive species can form up to 80% of vegetation coverage, and its distribution is accompanied by an increase in the degree of dominance and a decrease in the diversity of plant communities. The main factors affecting the invasion and distribution of Ambrosia artemisiifolia in grasslands are likewise altitude, precipitation (of the most humid quarter and month), and the normalized difference vegetation index in summer. Vegetation coverage, which characterizes the level of competition from native species, is also an important factor. At present, the habitats most suitable for the species are medium-productive grasslands of the foothills and low mountains with relatively low vegetation coverage (65-85%) and a moderately humid climate slightly drier than that suitable for Erigeron annuus. The main factors affecting the invasion and distribution of Xanthium albinum in grasslands are precipitation of the driest month and driest quarter, the normalized difference vegetation index in summer, and altitude. Vegetation parameters characterizing the level of grassland degradation are also influential: vegetation coverage, species richness, grass height, and the Berger-Parker and Shannon indices. 
At present, the habitats most suitable for the species are dry, unproductive, disturbed grasslands of the plains with relatively low vegetation coverage, grass height, species richness, and species diversity, and a high degree of dominance of grazing-resistant secondary pasture dominants. Thus, while the distribution of Ambrosia artemisiifolia and Xanthium albinum depends largely on anthropogenic disturbance of grasslands, the distribution of Erigeron annuus is limited mainly by climatic conditions: temperature and precipitation. Climate warming might contribute to the expansion of Erigeron annuus into the midmountains and high mountains of the Central Caucasus, with the transformation of large areas of steppe and subalpine meadows. The studies were carried out as part of state assignment no. 075-00347-19-00 on the topic "Patterns of the Spatiotemporal Dynamics of Meadow and Forest Ecosystems in Mountainous Areas (Russian Western and Central Caucasus)."
Sialic Acids on Varicella-Zoster Virus Glycoprotein B Are Required for Cell-Cell Fusion* Background: Myelin-associated glycoprotein (MAG) mediates varicella-zoster virus (VZV) infection by associating with glycoprotein B (gB). Results: Analyses of glycans on VZV gB revealed that sialic acids (SAs) on gB are required for membrane fusion and infection of MAG-expressing cells. Conclusion: SA-containing glycans on gB are necessary for VZV membrane fusion. Significance: The role of SAs during VZV infection is elucidated. Varicella-zoster virus (VZV) is a member of the human Herpesvirus family that causes varicella (chicken pox) and zoster (shingles). VZV latently infects sensory ganglia and is also responsible for encephalomyelitis. Myelin-associated glycoprotein (MAG), a member of the sialic acid (SA)-binding immunoglobulin-like lectin family, is mainly expressed in neural tissues. VZV glycoprotein B (gB) associates with MAG and mediates membrane fusion during VZV entry into host cells. The SA requirements of MAG when associating with its ligands vary depending on the specific ligand, but it is unclear whether the SAs on gB are involved in the association with MAG. In this study, we found that SAs on gB are essential for the association with MAG as well as for membrane fusion during VZV infection. MAG with a point mutation in the SA-binding site did not bind to gB and did not mediate cell-cell fusion or VZV entry. Cell-cell fusion and VZV entry mediated by the gB-MAG interaction were blocked by sialidase treatment. N-glycosylation or O-glycosylation inhibitors also inhibited the fusion and entry mediated by the gB-MAG interaction. Furthermore, gB with mutations in the N-glycosylation sites, i.e. asparagine residues 557 and 686, did not associate with MAG, and the cell-cell fusion efficiency was low. Fusion between the viral envelope and cellular membrane is essential for host cell entry by herpesviruses. 
Therefore, these results suggest that SAs on gB play important roles in MAG-mediated VZV infection. The human herpesvirus family comprises eight viruses, including herpes simplex virus (HSV) and varicella-zoster virus (VZV). VZV causes varicella (chicken pox) in most children, zoster (shingles) in adults or immunocompromised hosts, as well as encephalomyelitis and cranial neuritis (1)(2)(3). 
VZV may remain latent in the sensory ganglia and be reactivated in an immunocompromised state (4). Therefore, it is important to elucidate the mechanism employed by VZV to infect nerve tissues. Membrane fusion between the viral envelope and cellular membrane is an essential process for enveloped viruses, such as herpesviruses (5,6), when entering host cells (7)(8)(9). Therefore, elucidating the mechanism of membrane fusion is important for understanding the entry mechanism of enveloped viruses. Viral fusion proteins associate with membrane proteins on their host cells to induce membrane fusion in enveloped viruses. Viruses that belong to Herpesviridae typically require glycoprotein B (gB) and the gH-gL complex for membrane fusion. In the case of HSV, in addition to gB and the gH-gL complex, the association of HSV gD with nectin-1,2 and the herpesvirus entry mediator on host cells is essential for membrane fusion. In contrast, VZV does not have gD, and gB and the gH-gL complex are the minimal components for VZV membrane fusion, as we have reported previously (10). The cell surface receptors that mediate VZV membrane fusion by associating with envelope proteins were unclear for a long time. However, we demonstrated that myelin-associated glycoprotein (MAG) binds to VZV gB and that this interaction mediates membrane fusion during VZV infection (10). MAG belongs to the sialic acid (SA)-binding Ig-like lectin (Siglec) family and is also known as Siglec-4. Most Siglecs are expressed on hematopoietic cells, but the expression of MAG is restricted to neural tissues. It has been suggested that Siglec family molecules recognize their ligands in an SA-dependent manner (11,12). However, it has been reported that MAG can associate with the Nogo-66 receptor (NgR) in an SA-independent manner (13)(14)(15) or an SA-dependent manner (16). 
Although MAG also binds to the mouse paired immunoglobulin receptor B, human leukocyte Ig-like receptor B2 (LILRB2), and β1-integrin, it has remained unclear whether SAs are required for MAG binding to these ligands (17,18). In addition, MAG associates with fibronectin and some gangliosides, such as GD1a and GT1b, in an SA-dependent manner (19-21). These observations suggest that the requirement for SAs in the association of MAG with its ligands appears to differ depending on the nature of the ligands (22,23). Therefore, it is important to clarify whether SA is required for VZV infection mediated by the MAG-gB interaction. It is known that SAs on host cells are targets for various viruses attaching to host cells during infection (24). In particular, some viruses, such as influenza virus, use SAs as their entry receptors. On the other hand, most glycoproteins produced in mammalian cells are modified with SAs. Viruses utilize host cell systems to synthesize their components, so viral envelope glycoproteins are also modified with SAs (24). For example, the SAs on HSV envelope glycoproteins are involved in viral infection. Sialidase treatment of HSV decreases its infectivity (25). Furthermore, mutations of the O-glycosylation sites in HSV gB abrogate the interaction with paired immunoglobulin-like type 2 receptor α, one of the entry receptors for HSV (26). VZV gB, the gE-gI complex, and the gH-gL complex are also sialylated (27)(28)(29)(30)(31)(32). However, the specific SAs on VZV envelope proteins that are involved in membrane fusion and infection remain unclear. Therefore, it is important to clarify whether SAs are required for the MAG-gB interaction during membrane fusion and infection by VZV. Cell-cell fusion assays are powerful virus-free models for examining the machinery that is essential for membrane fusion during viral entry because it is not necessary to consider the effects of other viral components (33)(34)(35)(36)(37). 
Cell-cell fusion assays using gB, gH, gL, and MAG are the only systems available for assaying the function of VZV membrane fusion. In this study, we used these assays to investigate the roles of SAs on VZV gB during infection and performed viral infection experiments. Sialidase Treatment-The cell lysates used for Western blotting analysis and the cells utilized for analyzing the association with WT-MAG-Ig were treated with sialidase at 37°C for 10 and 30 min. VZV was treated for 10 min in the infection analysis and for 30 min in the VZV binding assay. Treatment with Glycosylation Inhibitors-293T cells were cultured in medium containing tunicamycin, DNJ, or benzyl-α-GalNAc and then transfected with WT-gB or mock-transfected. 24 h after transfection, the cells were stained with WT-MAG-Ig or anti-gB mAb (SG2), followed by flow cytometry analysis. Cell Lines and Viruses-Human melanoma MeWo cells (provided by Dr. A. M. Arvin, Stanford University), 293T cells (purchased from Riken), Plat-E cells (provided by Dr. T. Kitamura, The University of Tokyo), and a human oligodendroglial cell line (OL cells) (provided by Dr. K. Ikuta, Biken) were cultured in DMEM (Nacalai Tesque). All cells were cultured at 37°C in 5% CO2 in medium supplemented with 10% FCS, 100 units/ml penicillin, 100 μg/ml streptomycin, and 50 μM 2-mercaptoethanol. The VZV Oka strain and a recombinant Oka strain carrying the GFP reporter gene (GFP-VZV, provided by Dr. A. M. Arvin, Stanford University) were used in the VZV infection analyses (38). Because GFP is expressed via the CMV promoter in the recombinant VZV, the virus particle itself does not contain GFP. The cell-free virus was prepared from VZV-infected MeWo cells by harvesting cells with 0.25% EDTA (2.5 ml/100-mm dish of infected cells) and resuspending the harvested cells in 0.4 ml of SPGA buffer (pH 8.0) (218 mM sucrose, 3.8 mM KH2PO4, 4.9 mM sodium glutamate, and 1% bovine serum albumin). 
The suspended cells were sonicated twice on ice for 15 s with a 1-min interval, followed by centrifugation at 12,000 × g for 5 min at 4°C. The resulting supernatant was passed through a 0.45-μm filter and stored at −80°C. The frozen supernatant was thawed immediately before use as the cell-free virus. The viral titers were determined using MAG-transfected OL cells. MeWo cells, cultured at a density of 2 × 10⁵ cells/well in 24-well tissue culture plates, were infected with GFP-VZV in a cell-associated manner and cultured with N- and O-glycosylation inhibitors simultaneously. Culture supernatants were collected after more than 80% of the cells showed a cytopathic effect. Each supernatant, after the removal of cells and debris, was used as a cell-free virus. Plasmids-The plasmids encoding human wild-type MAG (WT-MAG), WT-MAG-Ig, VZV gB, gH, gL, and gB-Ig have been described previously (10). Plasmids for the mutated MAG and MAG-Ig fusion protein, i.e. with a mutation of the arginine at position 118 to alanine (R118A-MAG and R118A-MAG-Ig, respectively), were engineered using a QuikChange site-directed mutagenesis kit (Agilent Technologies) and a primer pair (sense, 5′-GGGAAGTACTACTTCGCTGGGGACCTGGGCGGC-3′; antisense, 5′-GCCGCCCAGGTCCCCAGCGAAGTAGTACTTCCC-3′). The gB mutants were cloned by recombinant PCR using the WT-gB plasmid as a template as follows: cloning the upper portion using a primer pair (sense, IO2045 5′-aataatGAATTCCACCatgtccccttgtggct-3′; antisense, each antisense primer substituting Ser/Thr or Asn with Ala (Figs. 4 and 6)); cloning the lower portions using a primer pair (sense, each sense primer substituting Ser/Thr or Asn with Ala (Figs. 4 and 6); antisense, IO3230, 5′-aataatctcgagttacacccccgttacat-3′); and cloning the full-length gB with a mutation using the upper and lower portions as templates with the primer pair IO2045 and IO3230. The mutated gB was inserted into the pCAGGS-MCS vector at the EcoRI and XhoI sites. 
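QuikChange-style mutagenesis primers such as the R118A pair above anneal to opposite strands of the same site, so each primer should be the exact reverse complement of the other; a small sketch to check this (the primer sequences are taken from the text; the helper function is our own illustration, not from the study):

```python
def reverse_complement(seq):
    """Return the reverse complement of a DNA sequence."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[base] for base in reversed(seq.upper()))

# R118A mutagenesis primer pair (sequences from the Plasmids section)
sense = "GGGAAGTACTACTTCGCTGGGGACCTGGGCGGC"
antisense = "GCCGCCCAGGTCCCCAGCGAAGTAGTACTTCCC"

# Each primer should equal the reverse complement of the other.
print(reverse_complement(sense) == antisense)  # True
```

The same check applies to any complementary mutagenesis primer pair before ordering oligos.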
A plasmid expressing the extracellular domain of gB fused with the glycosylphosphatidylinositol (GPI) anchor of decay-accelerating factor (CD55) was cloned by recombinant PCR as follows: cloning the upper portion using a primer pair (sense, IO2045; antisense, 5′-tttggggttgtttcatgaaaCTCGAGcccaaatgggttagataaaa-3′) with the WT-gB plasmid as a template; cloning the lower portion using a primer pair (sense, 5′-ttttatctaacccatttgggCTCGAGtttcatgaaacaaccccaaa-3′; antisense, IO3025, 5′-aataatGTCGACctaagtcagcaagcccatgg-3′) with human peripheral blood mononuclear cell cDNA as a template; the upper and lower portions were then connected with IO2045 and IO3025. WT-gB-GPI was digested with the restriction enzymes EcoRI and SalI and inserted into the pCAGGS-MCS vector at the EcoRI and XhoI sites. The extracellular domains of gB (N147A, T129A, and S559A) were cloned from the corresponding full-length gB mutants as described above using a primer pair (sense, IO2045; antisense, 5′-aataatCTCGAGaaatgggttagataaaaa-3′). The extracellular domain of WT-gB in WT-gB-GPI inserted into pCAGGS-MCS was replaced by the extracellular domain of gB (N147A, T129A, or S559A) using the restriction enzymes EcoRI and XhoI. Ig Fusion Protein-The plasmids for Ig fusion proteins were constructed as described above. 293T cells were transfected transiently with expression vectors for Ig fusion proteins, and the culture supernatants were collected. The empty Ig fusion protein containing the signal peptide of mouse signaling lymphocyte activation molecule (SLAM, CD150) and the human IgG1 Fc portion was used as a control (39). Transfection-Plasmid DNA (0.8 μg in 50 μl of Opti-MEM (Life Technologies)) mixed with polyethylenimine "Max" (molecular mass, 25,000) (Polysciences) (4 μg in 50 μl of Opti-MEM) was incubated at room temperature for 20 min. This diluent was transfected into 293T cells at a density of 2 × 10^5 cells/well in 24-well tissue culture plates.
Stable transfectants that expressed human MAG or R118A-MAG were generated with a retroviral transfection system using the pMxs-puro retroviral vector, Plat-E packaging cells, and puromycin (1 μg/ml, Nacalai Tesque), as described previously (40, 41). Flow Cytometry Analysis-Cells were incubated with Ig fusion proteins or primary mAbs, followed by staining with allophycocyanin-conjugated anti-human IgG or anti-mouse IgG Ab (Jackson ImmunoResearch Laboratories) before analysis with a flow cytometer (FACSCalibur, BD Biosciences). Plasmids encoding MAG, R118A-MAG, gB, or gB mutants, or empty vector (mock), were cotransfected with GFP into 293T cells. GFP+ cells were analyzed as transfected cells using CellQuest Pro (BD Biosciences). In the analysis of VZV binding, cells were incubated with cell-free VZV (Oka strain) suspension for 30 min on ice, followed by washing and staining with anti-gB mAb and allophycocyanin-conjugated anti-mouse IgG antibody. Viral Infection-Cell-free VZV infection was analyzed by mixing 2 × 10^4 cells with various amounts of cell-free VZV in 96-well tissue culture plates. The plates were centrifuged at 2500 rpm for 2 h at 32°C, followed by 24-h culture. The cells were analyzed by flow cytometry or by fluorescence microscopy (Carl Zeiss). Photographs were obtained using a D3 digital camera (Nikon), and the images were then processed using Canvas software (ACD Systems). GFP+ cells were detected as infected cells. Cell-Cell Fusion Assay-Plasmids encoding VZV gB, gH, and gL as well as a plasmid encoding T7 RNA polymerase (pCAGT7) were cotransfected into 293T cells, and the transfectants were used as effector cells. A plasmid encoding WT-MAG or R118A-MAG, or empty vector (mock), as well as a plasmid carrying the firefly luciferase gene under the control of the T7 promoter (pT7EMCLuc), was cotransfected into 293T cells, and the transfectants were used as target cells (42).
As an internal control, the Renilla luciferase gene driven by the SV40 promoter (pRL-SV40, Promega) was also cotransfected into the effector cells or target cells. 24 h after transfection, the effector cells (4 × 10^4 cells) were cocultured with target cells (4 × 10^4 cells) in 96-well tissue culture plates for 18 h, and the efficiency of cell-cell fusion was quantified using a Dual-Luciferase reporter assay system (Promega) and a luminometer (TriStar LB941, Berthold), as reported previously (10, 42). Relative firefly luciferase activity was calculated as follows: ((firefly luciferase activity / Renilla luciferase activity) × 100) / maximum (firefly luciferase activity / Renilla luciferase activity). The cells were transfected with VZV glycoproteins and cultured with medium containing tunicamycin, DNJ, or benzyl-α-GalNAc, alone or in combination. Thereafter, the VZV glycoprotein-transfected effector cells were cocultured with 293T target cells transfected with MAG in the presence of the respective inhibitors. In the other assay, effector cells transfected with VZV glycoproteins were treated with sialidase for 30 min before coculture with target cells. Thereafter, effector cells were cocultured with target cells in the presence of sialidase. Significant differences between the results were determined using Student's t test or one-way analysis of variance (each significant p value is shown in the figures), where p < 0.05 was considered significant. Metabolic Labeling-293T cells transfected with WT-gB, mutant gBs, or mock-transfected were cultured in DMEM containing 50 μM N-azidoacetylmannosamine (ManNAz) (Life Technologies) for 48 h. The cells were collected and incubated with PBS containing 1% FCS and phosphine conjugated with biotin (250 μM) (Cayman) at room temperature for 1 h. The cells were stained with streptavidin conjugated with allophycocyanin and subjected to flow cytometry analysis.
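The normalization and significance test described in this paragraph can be sketched as follows. This is a minimal interpretation: firefly luminescence is normalized to the Renilla internal control and expressed as a percentage of the maximum normalized value, and groups are compared with the standard pooled-variance Student's t statistic. All well values below are made up:

```python
from statistics import mean, stdev

def relative_fusion(firefly, renilla):
    """Normalize firefly luminescence to the Renilla internal control and
    express each sample as a percentage of the maximum normalized value."""
    ratios = [f / r for f, r in zip(firefly, renilla)]
    top = max(ratios)
    return [100.0 * x / top for x in ratios]

def t_statistic(a, b):
    """Two-sample Student's t statistic with pooled (equal) variances."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

# Hypothetical triplicate wells.
print([round(x, 1) for x in relative_fusion([5000.0, 2500.0, 100.0],
                                            [1000.0, 1000.0, 1000.0])])
# → [100.0, 50.0, 2.0]
print(round(t_statistic([95.0, 100.0, 105.0], [40.0, 50.0, 60.0]), 2))
# → 7.75
```

The p value would then come from the t distribution with n1 + n2 − 2 degrees of freedom, which in practice is handled by statistics software rather than by hand.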
Results
SAs on gB Are Required to Interact with MAG-To analyze the role of SAs in the interaction between VZV gB and MAG, we first determined whether gB contains SAs in VZV-infected and gB-transfected cells. The cell lysates of VZV-infected and gB-transfected cells were treated with sialidase or mock-treated, and the molecular weight of gB was determined by SDS-PAGE. Using anti-gB mAb, we found two forms of gB with different apparent molecular weights in VZV-infected cells (Fig. 1A). This result was consistent with a previous study in which the immunoprecipitate obtained with anti-gB mAb from the lysate of VZV-infected cells contained 140- and 124-kDa gB molecules (29). The molecular weight of the 140-kDa gB treated with sialidase was significantly lower than that of the mock-treated gB (Fig. 1A). The 124-kDa band also shifted to a molecular mass lower than 100 kDa. These data suggest that both forms of gB are sialylated, in VZV-infected as well as gB-transfected cells. Next, we used flow cytometry to examine whether the SAs on gB are required for the association with MAG. The MAG-Ig fusion protein, which comprises the MAG extracellular domain and the Fc fragment of human immunoglobulin, bound to the gB transfectants, as reported previously (Fig. 1B, top panel) (10). MAG-Ig binding to the gB transfectants was decreased by sialidase treatment. On the other hand, MAG-Ig bound weakly to the mock-transfected cells, and this binding was also decreased by sialidase treatment. The analysis using anti-gB mAb demonstrated that the cell surface expression level of gB was not changed by sialidase treatment (Fig. 1B, bottom panel). These results suggest that SAs are required for gB to associate with MAG and that gB possesses certain sialylated glycan structures that are preferentially recognized by MAG compared with other molecules on 293T cells. We then examined whether sialylated gB is involved in VZV entry into MAG-expressing cells.
MAG-transfected oligodendrocytes (OL cells) were exposed to cell-free recombinant VZV carrying the GFP reporter gene (GFP-VZV). GFP-VZV virions were treated with sialidase or mock-treated before infection. The proportion of VZV-infected cells among the MAG transfectants decreased by 50% after sialidase treatment compared with mock treatment (Fig. 1C). Indeed, virion binding to MAG-expressing cells was also decreased by treatment with sialidase (Fig. 1D). Therefore, the SAs in VZV appear to be involved in viral binding and entry. However, treatment of virions for more than 15 min, even mock treatment, resulted in no infectivity (data not shown). Therefore, we treated VZV virions with sialidase for just 10 min. Sialidase treatment of VZV virions significantly inhibited VZV infection compared with mock treatment, although the inhibition was incomplete. MAG-Ig binding to gB-transfected cells was not completely abolished by sialidase treatment for 10 min, although it was almost completely abolished after sialidase treatment for 30 min (Fig. 1E). Therefore, the incomplete inhibition of VZV infection by sialidase treatment for 10 min seems to be due to the insufficient removal of SAs from VZV.
FIGURE 1. A, cell lysates were analyzed by non-reducing SDS-PAGE after treatment with (+) or without (−) sialidase. Proteins were blotted with anti-gB mAb. B, mock- or gB-transfected 293T cells were treated with sialidase (bold line) or vehicle (thin line) and stained with MAG-Ig or anti-gB mAb, followed by flow cytometry analysis. Cells were stained with secondary antibody only (gray area). C, OL cells stably transfected with WT-MAG were exposed to GFP-VZV at a multiplicity of infection of 0.2 for 24 h. Fold-changes in the infection rate are shown. Fold-changes in the infection rate were calculated by dividing the percentage of GFP+ cells in each indicated treatment by the percentage of GFP+ cells infected with mock-treated VZV (%GFP+ cells among the total cells exposed to mock- or sialidase-treated VZV / %GFP+ cells among the total cells exposed to mock-treated VZV). Representative data from three independent experiments are shown. The error bars represent the mean ± S.D. on the basis of triplicate samples, and the p values were calculated using Student's t test. D, 293T cells transfected with MAG or mock-transfected were incubated with VZV virions after mock treatment (thin lines) or treatment with sialidase for 30 min (bold lines), followed by staining with anti-gB mAb and secondary antibody. Cells were also stained with antibodies only (gray area). E, 293T cells transfected with gB were stained with MAG-Ig after treatment with sialidase for 0 (gray area), 10 (thin line), and 30 min (bold line). Cells were also stained with secondary antibodies only (dotted line). F, 293T effector cells transfected with gH, gL, T7 polymerase, and Renilla luciferase as an internal control, together with gB (gBgHgL) or without gB (gHgL), were cocultured with 293T target cells transfected with WT-MAG and firefly luciferase driven by the T7 promoter. gB-transfected cells were treated with sialidase or mock-treated. The relative fusion efficiencies are shown on the basis of representative data from three independent experiments. The error bars represent the mean ± S.D. on the basis of triplicate samples, and p values were calculated using Student's t test.
Therefore, we also examined the involvement of SAs in membrane fusion using a cell-cell fusion assay. 293T cells were transfected with gB, gH, and gL and then treated with sialidase for 30 min before coculture.
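The fold-change calculation described above reduces to a simple ratio of infection rates. A minimal sketch, with hypothetical percentages:

```python
def fold_change(pct_gfp_treated: float, pct_gfp_mock: float) -> float:
    """%GFP+ cells after a given virion treatment divided by %GFP+ cells
    after mock treatment of the virus (fold-change of 1.0 = no effect)."""
    return pct_gfp_treated / pct_gfp_mock

# A 50% drop in infected cells after sialidase treatment corresponds to a
# fold-change of 0.5 (the percentages here are made up):
print(fold_change(10.0, 20.0))  # → 0.5
```

Because the mock-treated condition is the denominator, its fold-change is 1.0 by construction, which is why the mock bar serves as the reference in the figure.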
Thereafter, the sialidase-treated effector cells were cocultured with 293T target cells transfected with MAG in the presence of sialidase. Cell-cell fusion was significantly but incompletely inhibited by sialidase (Fig. 1F). These observations suggested that the SAs appeared to be involved in cell-cell fusion. In addition, newly synthesized and sialylated gB on effector cells might have mediated weak cellcell fusion, even in the presence of sialidase, although there is a possibility that sialidase activity was not enough to remove SAs completely. Arg-118 in MAG Plays an Important Role in the Association with gB-Furthermore, we employed an alternative approach to determine whether the SAs on VZV are involved in the infection process. Siglecs possess a conserved Arg residue that is essential for the recognition of SAs. Indeed, Arg-118 is known to be essential for MAG to associate with SA-containing molecules (11). To investigate whether Arg-118 is involved in the interaction between MAG and gB, we generated a mutated MAG-Ig fusion protein by mutating Arg-118 to Ala (R118A-MAG-Ig). As shown in Fig. 2A, gB-transfected 293T cells were stained strongly with wild-type MAG-Ig (WT-MAG-Ig) but not with R118A-MAG-Ig or control Ig. Furthermore, when the cell lysates of melanoma cells infected with VZV were immunoprecipitated with WT-and R118A-MAG-Ig fusion proteins, gB was precipitated with WT-MAG-Ig but not with R118A-MAG-Ig (Fig. 2B). By contrast, 293T cells transfected with WT-MAG were stained with gB-Ig, but 293T cells transfected with R118A-MAG were not stained with gB-Ig (Fig. 2C). R118A-MAG transfectants also showed less binding to VZV virions than WT-MAG transfectants (Fig. 2D). These results suggest that Arg-118 in MAG is essential for the association between MAG and VZV. 
Arg-118 in MAG Is Required for MAG-mediated Cell-Cell Fusion and VZV Entry-Membrane fusion is necessary for VZV entry into host cells, and the interaction between MAG and gB mediates membrane fusion during VZV infection (10). Using a cell-cell fusion assay, we analyzed whether the SA-dependent recognition of gB by MAG is involved in membrane fusion. The WT-MAG-transfected cells exhibited efficient cell-cell fusion with cells transfected with the VZV envelope glycoproteins (gB, gH, and gL), whereas R118A-MAG-transfected and mock-transfected cells exhibited little fusion (Fig. 3A). Next, we examined whether MAG recognition of sialylated gB is involved in VZV infection of MAG-expressing cells. WT-MAG-, R118A-MAG-, or mock-transfected oligodendrocytes were exposed to GFP-VZV. As shown in Fig. 3B, the expression level of R118A-MAG on the cell surface was comparable with that of WT-MAG when analyzed using an anti-MAG monoclonal antibody. As expected, both the R118A-MAG transfectants and mock transfectants were resistant to GFP-VZV, whereas the WT-MAG transfectants were infected efficiently with GFP-VZV when the GFP expression levels in the infected cells were analyzed by fluorescence microscopy (Fig. 3C). Similar results were obtained by flow cytometry. The proportion of infected cells among WT-MAG-transfected cells increased in a virus dose-dependent manner, but the proportions among R118A-MAG-transfected cells and mock-transfected cells did not (Fig. 3D). These results suggest that MAG recognition of SAs on gB is required for membrane fusion during VZV infection of MAG-expressing cells. Both N- and O-glycosylation of gB Are Required for the Association of gB with MAG and for Cell-Cell Fusion-It is known that SAs typically exist at the termini of N- or O-linked glycans or glycosphingolipids in mammalian cells (43). We predicted the N- and O-glycosylation sites on VZV gB using the NetNGlyc 1.0 server and the NetOGlyc 4.0 server.
AUGUST 7, 2015 • VOLUME 290 • NUMBER 32 JOURNAL OF BIOLOGICAL CHEMISTRY 19837
The extracellular domain of gB was predicted to have seven N-glycosylation sites and 17 O-glycosylation sites (Fig. 4A). gB-transfected 293T cells were treated with inhibitors of N-glycan synthesis, tunicamycin and DNJ, or an inhibitor of O-glycan synthesis, benzyl-α-GalNAc. The gB transfectants treated with tunicamycin and DNJ associated less efficiently with WT-MAG-Ig compared with mock-treated cells (Fig. 4B). Treatment with benzyl-α-GalNAc also decreased the binding of WT-MAG-Ig to gB-transfected cells (Fig. 4B). WT-MAG-Ig also exhibited a lower affinity for mock transfectants after treatment with benzyl-α-GalNAc than after mock treatment. These results suggest that WT-MAG-Ig recognizes not only sialylated N-glycans on gB but also sialylated O-glycans on certain molecules, including gB. In addition, benzyl-α-GalNAc also seems to affect glycan structures on certain MAG-binding molecules of 293T cells. Using a cell-cell fusion assay, we then analyzed whether SAs on the N- and O-glycans of gB-transfected cells were involved in membrane fusion with MAG-expressing cells. 293T effector cells transfected with VZV glycoproteins or mock-transfected were incubated with medium containing a combination of tunicamycin, DNJ, and benzyl-α-GalNAc, or with mock medium, before subsequent coculture with MAG-expressing cells. Thereafter, effector cells were cocultured with target cells in the presence of the inhibitors. Treatment with tunicamycin, DNJ, or benzyl-α-GalNAc decreased fusion efficiency compared with mock treatment (Fig. 4C). Furthermore, culture with medium containing an N-glycosylation inhibitor together with benzyl-α-GalNAc yielded lower fusion efficiencies than culture with medium containing an N-glycosylation inhibitor or the O-glycosylation inhibitor alone (Fig. 4C).
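Whatever scoring NetNGlyc applies on top, candidate N-glycosylation sites must first contain the canonical sequon N-X-S/T with X ≠ P, so a first-pass scan for candidate Asn residues can be sketched with a simple motif search. The sequence below is a made-up toy, not VZV gB:

```python
import re

# 1-based positions of Asn residues in canonical N-X-S/T sequons (X != P).
# This motif scan is only a first filter, not a substitute for NetNGlyc's
# neural-network scoring.
SEQUON = re.compile(r"N(?=[^P][ST])")

def sequon_positions(protein: str):
    return [m.start() + 1 for m in SEQUON.finditer(protein)]

print(sequon_positions("MKNGSAANPTQNAT"))  # → [3, 12]
```

The lookahead `(?=...)` keeps matches overlap-safe, so adjacent sequons such as N-N-S-T would both be reported; note that the Asn at position 8 of the toy sequence is skipped because the following residue is Pro.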
VZV produced by cells cultured in the presence of N-glycosylation and/or O-glycosylation inhibitors showed lower infectivity toward MAG-expressing cells than VZV produced by cells cultured in the absence of these inhibitors (Fig. 4D). The infectivity of VZV derived from cells incubated with either of the two N-glycosylation inhibitors along with benzyl-α-GalNAc was not significantly different from the infectivity of VZV derived from cells treated with either of these inhibitors alone (Fig. 4D). These results suggest that SAs capping the N- and O-glycans of gB are involved in cell-cell fusion and in VZV infection mediated by MAG. Mutation of Putative O-glycosylation Sites on gB Does Not Significantly Affect Cell-Cell Fusion-To elucidate the mechanism of viral entry, it is important to identify the specific amino acid residues or domains of envelope glycoproteins that contribute to membrane fusion. We investigated which of the seven N-glycosylation sites or 17 O-glycosylation sites are required for cell-cell fusion with cells that express MAG. We generated gBs in which the Asn residue or Ser/Thr residues were substituted with Ala at each putative N-glycosylated or O-glycosylated site. As shown in Fig. 5, cells that expressed gBs with mutations of putative O-glycosylation sites exhibited fusion efficiency comparable with that of WT-gB-expressing cells. We also performed cell-cell fusion assays using gBs in which other Ser/Thr residues were substituted with Ala. We reasoned that the SAs on O-glycans might be involved in the recognition of gB by MAG in the same way that SAs on the O-glycans of HSV gB are required for PILRα to bind to HSV gB (26, 44). We selected Ser/Thr residues in gB that were predicted to have enhancement value product values of >2.00 using the ISOGlyP server. However, the fusion efficiency of each mutated gB-transfected cell type was not decreased significantly compared with that of the WT-gB-transfected cells, except for cell types transfected with mutants of Thr-129, Thr-265, and Ser-559 (Fig.
6A). The cell surface expression of gBs in which Thr-129 or Ser-559 was substituted with Ala (T129A-gB or S559A-gB) was severely impaired (Fig. 6B). We therefore expressed T129A-gB, S559A-gB, and WT-gB as GPI-anchored forms (T129A-gB-GPI, S559A-gB-GPI, and WT-gB-GPI) so that they would be expressed equally on the cell surface. The expression levels of T129A-gB-GPI and S559A-gB-GPI were comparable with that of WT-gB-GPI, and the interaction between MAG-Ig and S559A-gB-GPI was similar to the interaction with WT-gB-GPI (Fig. 6B). T265A-gB was expressed on the transfected cell surface as well as WT-gB, and MAG-Ig binding to T265A-gB transfectants was as efficient as MAG-Ig binding to WT-gB transfectants (Fig. 6B). Therefore, Thr-129, Thr-265, and Ser-559 did not appear to be involved in cell-cell fusion mediated by binding to MAG. Overall, none of the O-glycosylation site-mutated gBs exhibited significantly decreased fusion efficiencies compared with WT-gB.
FIGURE 5. WT-gB or each mutated gB in which putative O-glycosylation sites were mutated was cotransfected into 293T effector cells with gH and gL. The effector cells were cocultured with other 293T target cells transfected with MAG, followed by luminescence measurements. The relative fusion efficiencies are shown on the basis of representative data from three independent experiments. The error bars represent the mean ± S.D. on the basis of six replicate samples. Statistical differences were determined using Student's t test. Each mutated gB without a p value did not differ significantly from wild-type gB (WT). p < 0.05 was considered significant.
Asn-557 and Asn-686 in gB Are Involved in Cell-Cell Fusion Mediated by the Association with MAG-Among cells that expressed gB with mutations in putative N-glycosylation sites, the binding efficiencies to MAG-Ig of cells that expressed gB with a mutation in Asn-147, Asn-557, or Asn-686 (N147A-gB, N557A-gB, or N686A-gB) were lower than that of WT-gB (Fig. 7A, top panel). However, the binding of anti-gB mAb to N147A-gB also decreased (Fig. 7A, bottom panel). When we generated the GPI-anchored form of N147A-gB, the expression level of N147A-gB-GPI was comparable with that of WT-gB-GPI, and the binding of MAG-Ig to N147A-gB-GPI was similar to the binding to WT-gB-GPI (Fig. 7B). Cells that expressed gB with mutations in the putative N-glycosylation sites Asn-147, Asn-557, or Asn-686 (N147A-gB, N557A-gB, or N686A-gB) exhibited reduced cell-cell fusion with MAG-expressing cells compared with cells that expressed WT-gB (Fig. 7C). Therefore, the decreased fusion efficiency caused by the mutation of Asn-147 appeared to be attributable to the decreased surface expression of mutant gB (Fig. 7A). On the other hand, Asn-686 might contribute additionally to cell-cell fusion because N686A-gB slightly but significantly decreased fusion efficiency. Therefore, Asn-557 appears to be the key N-glycosylation site in gB involved in membrane fusion with MAG-expressing cells. Asn-557 and Asn-686 Are Involved in gB Sialylation-To confirm whether Asn-557 and Asn-686 are sialylated, 293T cells transfected with WT-gB, N557A-gB, or N686A-gB and mock transfectants were labeled with ManNAz (Fig. 8, A and B). Using this labeling method, 5-40% of the total SAs in glycoproteins and glycolipids are labeled with ManNAz, which was detected by biotin-labeled phosphine (45, 46). In this analysis, the overexpression of gB in 293T cells significantly increased SAs on the cell surface compared with mock transfectants. Expression levels of N557A-gB and N686A-gB on the cell surface were almost the same as that of WT-gB (data not shown). The cell surfaces of the N557A-gB- and N686A-gB-transfected cells were weakly labeled with ManNAz compared with WT-gB-transfected cells, although they were labeled more than the tunicamycin-treated WT-gB transfectants.
The cell surfaces of N686A-gB-transfected cells were significantly but weakly labeled with ManNAz compared with WT-gB-transfected cells. These results suggest that both Asn-557 and Asn-686 are involved in the sialylation of gB.
FIGURE 7 (in part). ..., followed by flow cytometry analysis. C, WT-gB or mutated gB in which putative N-glycosylation sites were mutated was cotransfected into 293T effector cells with gH and gL. The effector cells were cocultured with other 293T target cells transfected with MAG, followed by luminescence measurements. The relative fusion efficiencies are shown on the basis of representative data from three independent experiments. The error bars represent the mean ± S.D. on the basis of six replicate samples. Statistical differences were determined using Student's t test. Each mutated gB without a p value did not differ significantly from wild-type gB (WT). p < 0.05 was considered significant.
Discussion
It has been suggested that the SAs of envelope proteins are involved in VZV infection, although the mechanism has remained unclear (31, 32). Sialylation of gB, the gE-gI complex, and the gH-gL complex has been reported previously (27-32). In a previous study, we also demonstrated that MAG associates with gB and gE but not with the gH-gL complex (10). It has been suggested that gE is essential for VZV survival (27) but that gE and gI are not involved in MAG-mediated cell-cell fusion (10). Therefore, we investigated whether the SAs on gB are involved in membrane fusion during VZV infection of MAG-expressing cells. In addition, MAG has been reported to bind ligands in an SA-dependent or SA-independent manner (22, 23). In this study, we demonstrated that the SAs on VZV gB are required for the interaction between gB and MAG. In addition, we observed that WT-MAG-Ig immunoprecipitated 140-kDa gB more efficiently than 124-kDa gB from VZV-infected cells. It has been reported that 140-kDa gB is the sialylated form of 124-kDa sulfated gB (29). These findings suggest that WT-MAG recognizes sialylated gB more strongly than unsialylated gB. Even without sialidase treatment, VZV lost its infectivity as time passed, but the infectivity of VZV treated with sialidase was lower than that of untreated VZV. In addition, sialidase treatment abrogated cell-cell fusion incompletely, probably because newly synthesized and sialylated gB on effector cells might mediate weak fusion. To determine whether SAs are involved in VZV infection, we employed a MAG mutant in which Arg-118 was substituted with Ala. R118A-MAG did not mediate membrane fusion with VZV envelope glycoproteins. Although R118A-MAG was expressed on the cell surface as well as WT-MAG, there is a possibility that the substitution of Arg-118 with Ala caused a conformational change in MAG and thereby affected cell-cell fusion. Because the structure of MAG has not been reported, we do not know whether Arg-118 affects the conformation of MAG. However, Arg-118, which forms an important salt bridge with the negatively charged carboxyl group of SA, is conserved among other Siglec family molecules. The structures of other Siglec family molecules suggest that Arg-118 in MAG is directly involved in binding to SAs (11, 12, 47-54). Therefore, the mutation of Arg-118 in MAG seems to have affected the association with SAs rather than the conformation of MAG. PILRα, which has a structure similar to those of Siglec family molecules, recognizes both sialylated sugar chains and polypeptides (44). Therefore, there is a possibility that MAG also recognizes both sialylated sugar chains and polypeptides, although the binding of MAG to fibronectin and to gangliosides such as GD1a and GT1b was blocked by SA-containing glycans (19-21). Previously, it has been reported that MAG binds to both N-glycans and O-glycans in an SA-dependent manner.
Some studies have shown that MAG binds to O-glycans more strongly than to N-glycans (55), whereas other studies have demonstrated that MAG has a higher affinity for N-glycans than for O-glycans (56, 57). In addition, fibronectin, which possesses more N-glycans than O-glycans, is suggested to be one of the MAG ligands that bind in an SA-dependent manner (19). Similarly, MAG binds to gB, which has seven putative N-glycosylation sites and 17 putative O-glycosylation sites in its extracellular domain. Treatment of gB-transfected cells with tunicamycin, DNJ, or benzyl-α-GalNAc decreased the binding of MAG-Ig to the gB-transfected cells. The fusion efficiency of cells treated with tunicamycin, DNJ, or benzyl-α-GalNAc was higher than that of cells treated with sialidase. Furthermore, N- and O-glycosylation inhibitors additively inhibited cell-cell fusion. These results suggest that both the N- and O-glycans on gB are involved in cell-cell fusion. Anti-gB mAb (151) inhibits VZV infection, but this mAb failed to precipitate gB from tunicamycin-treated VZV-infected cells (29), thereby supporting the hypothesis that the N-glycans on gB are required for VZV infection. In this study, VZV infection was blocked by N- or O-glycosylation inhibitors, although there is a possibility that not only gB but also other essential molecules of VZV might be modified by these inhibitors, resulting in the impairment of VZV replication or survival. In fact, we demonstrated that two of the seven putative N-glycosylation sites (Asn-557 and Asn-686) are sialylated and are involved in the binding of gB to MAG, thereby mediating significant cell-cell fusion. The weak ManNAz labeling of the cell surface of N686A-gB-transfected cells might account for the slight decrease in cell-cell fusion by these cells. The mutation of a single glycosylation site, Asn-557, on gB decreased the amount of SAs on the cell surface compared with WT-gB.
This suggests that more sialylated N-glycans are attached at Asn-557 than at other glycosylation sites, including Asn-686. N-glycans differ in branch number, composition, length, capping arrangements, and core modifications, resulting in N-glycans possessing varying numbers of SAs (58). Indeed, glycoproteins often carry a range of different N-glycans at a particular N-glycosylation site (58, 59). Different sites in a molecule have different subsets of N-glycans with different numbers of SAs (58, 60, 61). Therefore, the amount of SAs at each gB glycosylation site might differ, and Asn-557 of gB seems to be the major glycosylation site for sialylated N-glycans. On the other hand, although the expression levels of N557A-gB and N686A-gB on the cell surface were almost the same as that of WT-gB, there is a possibility that the substitution of Asn-557 or Asn-686 with Ala caused a conformational change in gB, resulting in the loss of cell-cell fusion. Indeed, T265A-gB lost fusion function, although its expression on the cell surface and its binding to MAG-Ig were not impaired. Thr-265 is localized in a gB fusion loop that is essential for membrane fusion (62). In addition, Asn-557 and Asn-686 are localized in domain III and domain V of gB, respectively, although the functions of these domains are unclear (62). Therefore, there is a possibility that conformational changes in gB caused by the point mutations N557A and N686A, as well as T265A, influence fusion function. On the other hand, the prediction of O-glycosylation sites is less accurate than the prediction of N-glycosylation sites because many enzymes are involved in O-glycan synthesis, whereas a single oligosaccharyltransferase enzyme transfers an oligosaccharide to the target protein during N-glycan generation (63, 64).
The fusion efficiencies of cells transfected with gB possessing mutations in each putative O-glycosylation site were not specifically decreased compared with the fusion efficiency of cells transfected with WT-gB. However, O-glycosylation of gB appears to be necessary for cell-cell fusion because treatment with benzyl-α-GalNAc significantly impaired cell-cell fusion. It is possible that there are other O-glycosylated Ser/Thr residues or that O-glycans generated on multiple O-glycosylation sites act synchronously in cell-cell fusion. Cell-cell fusion assays are powerful tools for the direct examination of the molecules essential for membrane fusion during viral entry because it is not necessary to consider the effects of other viral components. The cell-cell fusion assay using VZV gB, gH, gL, and MAG is the only system available for analyzing the mechanism of VZV membrane fusion. In this study, we investigated the involvement of SAs in membrane fusion during VZV entry using cell-cell fusion assays. This system is useful for investigating not only the roles of SAs but also other molecular modifications of the virus and host during VZV infection. Our study provides novel insights into the molecular mechanisms involved in membrane fusion during VZV infection.
Angioedema in the emergency department: a practical guide to differential diagnosis and management Background Angioedema is a common presentation in the emergency department (ED). Airway angioedema can be fatal; therefore, prompt diagnosis and correct treatment are vital. Objective of the review Based on the findings of two expert panels attended by international experts in angioedema and emergency medicine, this review aims to provide practical guidance on the diagnosis, differentiation, and management of histamine- and bradykinin-mediated angioedema in the ED. Review The most common pathophysiology underlying angioedema is mediated by histamine; however, ED staff must be alert for the less common bradykinin-mediated forms of angioedema. Crucially, bradykinin-mediated angioedema does not respond to the same treatment as histamine-mediated angioedema. Bradykinin-mediated angioedema can result from many causes, including hereditary defects in C1 esterase inhibitor (C1-INH), side effects of angiotensin-converting enzyme inhibitors (ACEis), or acquired deficiency in C1-INH. The increased use of ACEis in recent decades has resulted in more frequent encounters with ACEi-induced angioedema in the ED; however, surveys have shown that many ED staff may not know how to recognize or manage bradykinin-mediated angioedema, and hospitals may not have specific medications or protocols in place. Conclusion ED physicians must be aware of the different pathophysiologic pathways that lead to angioedema in order to efficiently and effectively manage these potentially fatal conditions. Background Angioedema is a relatively common presentation in the emergency department (ED). The lifetime prevalence of angioedema and/or urticaria in the United States is about 25% and results in more than one million ED visits each year [1,2]. Angioedema is mediated by several mechanisms, including histamine and bradykinin (Fig. 1). 
Diagnosis of the specific type of angioedema is essential for appropriate treatment [3]; however, many ED physicians may not know how to distinguish different types of angioedema or how to effectively treat less common presentations [4]. Each year in the USA, angioedema or allergic reactions lead to more than one million ED visits [2]. Of these, approximately 110,000 are coded as angioedema (either hereditary or acquired) compared with 979,400 coded as allergic reactions [2]. Approximately 42.5% of the visits coded as allergic reactions also include a code for urticaria [2]. Between 2280 and 5000 visits to US EDs each year are attributable to hereditary angioedema (HAE) [5,6], accounting for a rate of 1.87:100,000 ED visits [5]; however, these figures may underrepresent the true level of angioedema-related ED use [2,5,6]. Similar data from Italy suggest that 0.37% of all ED visits are related to angioedema [7], and a recent Canadian study estimated that 1:1000 ED visits are angioedema related [8]. Finally, a survey in the UK revealed that 30% of patients with hereditary or acquired angioedema have visited the ED [9]. Hospitalization can be used as a proxy measure of the severity of angioedema. Patients with undifferentiated angioedema (i.e., including both histamine- and bradykinin-mediated angioedema) visiting the ED are admitted for inpatient care (11% of ED visits) more frequently than patients with allergic reactions (2.2% of ED visits) [2,5]. Hospitalization rates following ED visits for HAE (45-50%) and ACEi-induced angioedema (42-66%) are even higher [5,6,10,12]. Hospitalizations for angioedema have increased over the last 15 years [15,16]. The rise is thought to be related to increased prescribing of ACEis over this time period [15,16]. Mortality data for angioedema are lacking; however, one study demonstrated a small but ever-present risk of death by asphyxiation in patients with HAE, with fatal laryngeal attacks developing within as little as 15 min [17]. 
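As a back-of-envelope check on these figures, the per-visit rate can be converted into an absolute annual count. The sketch below is our own illustration, not part of the review; the ~130 million total annual US ED visits it assumes is a commonly cited figure that does not appear in the text.

```python
# Hypothetical arithmetic check (not from the review): convert the reported
# HAE rate of 1.87 per 100,000 ED visits into an absolute annual count,
# assuming ~130 million total US ED visits per year (our assumption).

def visits_from_rate(rate_per_100k: float, total_visits: float) -> float:
    """Convert a per-100,000 rate into an absolute number of visits."""
    return rate_per_100k / 100_000 * total_visits

estimated_hae_visits = visits_from_rate(1.87, 130_000_000)
print(round(estimated_hae_visits))  # 2431, inside the reported 2280-5000 range
```

Under that assumption, the rate reproduces a count near the lower end of the reported 2280-5000 range, suggesting the two published figures are mutually consistent.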
Crucially, the risk of death is three- to nine-fold higher in patients who have not received a confirmed diagnosis of HAE, emphasizing the importance of preparation and awareness in preventing adverse outcomes [17]. Although angioedema with a well-defined bradykinin-mediated pathogenesis is relatively rare, most ED staff will likely encounter a case at some point in their career. Therefore, awareness of bradykinin-mediated angioedema is important. Because bradykinin-mediated angioedema is uncommon, EDs generally do not have protocols in place and lack immediate access to the appropriate drugs. For example, a recent survey of British EDs demonstrated that medications required to treat bradykinin-mediated angioedema were available in the majority of hospitals with specialist immunology services, but were not readily accessible in the ED (e.g., located in the main pharmacy). Additionally, only half the hospitals surveyed had established guidelines for the use of these medications [18]. Lack of protocols and access to medications can lead to treatment errors and poor outcomes for ED patients presenting with bradykinin-mediated angioedema [19,20]. This paper reports the findings and recommendations of two expert panels of 16 international experts in angioedema and emergency medicine convened during 2013 [21,22]. The aim of this paper is to provide practical guidance on the early identification of bradykinin-mediated angioedema in the ED to improve the diagnosis and outcomes of angioedema attacks.

Fig. 1 Schematic of biochemical pathways responsible for (a) histamine-mediated angioedema [62] and (b) bradykinin-mediated angioedema [26]. *C1 esterase inhibitor disrupts the action of factor XIIa and kallikrein. Ecallantide inhibits the action of kallikrein. Icatibant blocks bradykinin B2 receptors. ACE, angiotensin-converting enzyme; IgE, immunoglobulin E
Angioedema: subtypes and characteristics Angioedema is a transient subcutaneous or submucosal swelling that is non-pitting when pressure is applied [1]. Angioedema is distinct from edema, which is caused by accumulation of fluid in the interstitium and characterized by persistent pitting with pressure. Angioedema can be mediated by histamine, bradykinin, or other mechanisms [1]. Histamine-mediated angioedema Histamine-mediated angioedema often presents with urticaria and episodes of swelling that usually subside within 24-37 h. Histamine-mediated angioedema, also called allergic angioedema, is a type I immunoglobulin E-mediated hypersensitivity immune response of mast cell degranulation. This reaction occurs with previous sensitization to allergens such as insect stings, foods, and drugs [1]. Hereditary angioedema Most cases of HAE arise from mutations in the gene encoding for C1 esterase inhibitor (C1-INH), resulting in either low plasma concentrations of C1-INH (HAE type I) or normal concentrations of functionally impaired C1-INH (HAE type II) [1]. HAE affects approximately 1:50,000 people in the general population [22]. The mechanism of a third type of HAE with normal C1-INH concentrations and function [1] is not yet fully understood [24]. Attacks in patients with HAE with normal C1-INH are similar to those in patients with HAE types I and II. Acquired angioedema Bradykinin-mediated angioedema also can develop later in life and is known as acquired angioedema. This condition is very rare, with an approximate prevalence of 1:100,000 to 1:500,000 in the general population [25]. The symptoms of acquired angioedema are the same as those for HAE; the distinguishing characteristic of acquired angioedema is that almost all cases are diagnosed during or after the fourth decade of life and are often associated with an underlying lymphoproliferative disorder and/or antibodies directed against C1-INH [25]. 
ACEi-induced angioedema Another cause of bradykinin-mediated angioedema is associated with ACEis [26]. Angiotensin-converting enzyme is one of the two enzymes that degrade bradykinin; ACEis can therefore cause accumulation of bradykinin that results in angioedema (ACEi-induced angioedema) [26]. Symptoms of ACEi-induced angioedema are usually localized in the face or upper aerodigestive tract; the main characteristic is erythema (without itching) lasting 24-72 h, followed by spontaneous remission [27]. Reports of rare abdominal involvement have been published [28]. ACEi-induced angioedema has been reported as a side effect affecting 0.1-0.7% of patients and up to 1.6% in some studies [29]; a large US study reported an incidence of 0.2% [30]. As the prevalence of cardiovascular conditions increases with age, ACEi-induced angioedema is likely to present more frequently in patients >40 years of age [11]. ACEi-induced angioedema has been shown to be more prevalent in female and black patients [30,31]. Angioedema associated with other drugs Less commonly, an increased risk of nonhistaminergic angioedema has been associated with other classes of drugs, including nonsteroidal anti-inflammatory drugs (NSAIDs), antibiotics, and angiotensin receptor blockers (ARBs) [7,32]. NSAID-related angioedema is estimated to occur in 0.1-0.3% of persons exposed to NSAIDs [33,34]. Patients with underlying diseases such as asthma and chronic urticaria are at much higher risk, and up to 35% of such patients will have a reaction upon exposure to NSAIDs. There are multiple mechanisms of action for NSAID-induced angioedema, including COX-1 inhibition leading to production of inflammatory mediators (cysteinyl leukotrienes) and IgE-mediated hypersensitivity [35]. NSAID-mediated angioedema can be managed by stopping the NSAID and treating the angioedema in a manner similar to histaminergic angioedema. For ARBs, the angioedema incidence rate per 1000 person-years was 1.66 (95% CI 1.47, 1.86) [32]. 
The development of asymmetric angioedema in association with recombinant (r) tissue plasminogen activator (tPA) therapy for acute ischemic stroke has become a recognized phenomenon since first reports appeared in the literature [36]. Depending on the reporting center, rates of rtPA-associated angioedema range from 1.2-5.1% of acute ischemic stroke patients treated with rtPA [36-40]. Additionally, concurrent use of ACEis seems to increase risk [36,37]. Administration of rtPA not only activates components of the complement system including histamine but also leads to plasmin-mediated release of bradykinin [40,41]. While most cases of rtPA-associated angioedema are mild and resolve over 24 h, some cases are rapidly progressive and life threatening. Distinguishing histamine- versus bradykinin-mediated angioedema To ensure that patients are managed correctly, identification of the underlying cause of angioedema on presentation is essential. Unfortunately, no validated, rapid, point-of-care diagnostic test is available to differentiate a bradykinin-mediated from a histamine-mediated attack; however, a number of distinguishing features can guide the diagnosis (Fig. 2 [21,42] and Fig. 3). Urticaria is common in histamine-mediated angioedema. A recent Canadian study found that 29.8% of patients presenting with angioedema also had urticaria, and this was significantly associated with triggers such as insect stings, certain foods, or drugs (other than NSAIDs and ACEis) [8]. Itching is not usually associated with bradykinin-mediated attacks. Urticaria does not occur with bradykinin-mediated angioedema (hereditary or ACEi induced; Fig. 3). Speed of onset also may be a differentiating factor (Fig. 4 [21]). Histamine-mediated angioedema can occur quickly (≤1 h of exposure to allergens). 
Hereditary and acquired angioedema symptoms usually have a slower, progressive onset and develop over several hours, but occasionally can develop quickly, or appear to do so (e.g., if an attack starts while a patient is sleeping). Untreated hereditary and acquired angioedema attacks tend to be more severe and persistent than histamine-mediated angioedema attacks, typically persisting for 48-72 h and occasionally up to 5 days [43]. Also, bradykinin-mediated attacks are more likely to have abdominal involvement (Fig. 3) [21], with at least 50% of attacks involving the abdomen [44]. If a patient with a prior diagnosis of HAE presents with abdominal pain and/or swelling, gastrointestinal angioedema should be considered in the differential diagnosis, even if the patient has not previously experienced similar attacks [45]. In cases with no prior diagnosis of HAE, a differential diagnosis is much more difficult. Patient history is vital in patients presenting with abdominal pain. A history of recurrent abdominal pain and swelling, especially if accompanied by a family history of similar symptoms, may suggest HAE [46]. Response to treatment with antihistamines, corticosteroids, and epinephrine may distinguish histamine- and bradykinin-mediated angioedema. Histamine-mediated angioedema will respond to treatment with antihistamines, corticosteroids, and epinephrine, whereas bradykinin-mediated (including hereditary, acquired, and ACEi-induced) angioedema will not. Although response or lack of response to treatment is not an appropriate diagnostic measure in the ED, the effect of treatment can be a useful clinical clue for follow-up and subsequent diagnosis.

Fig. 2 Flow diagram of diagnosis of angioedema in the emergency department [21,42]. ACEi, angiotensin-converting enzyme inhibitor; HAE, hereditary angioedema

Fig. 4 Schematic representation of angioedema attack onset and duration. Histamine-mediated angioedema attacks tend to have rapid onset and resolution. Bradykinin-mediated angioedema usually develops more slowly and can persist for ≤5 days, although angiotensin-converting enzyme inhibitor (ACEi)-induced angioedema will usually resolve ≤48 h once the drug is discontinued

Confirmatory tests A diagnosis of hereditary or acquired angioedema can be confirmed with blood tests; however, currently available blood tests cannot confirm ACEi-induced angioedema [1,21,47,48] (Table 1). In a patient presenting with new-onset isolated angioedema with or without a family history, consider obtaining a screening C4 level to aid in diagnosis; 25% of HAE cases may be spontaneous mutations and therefore may not be associated with a family history. Results of blood tests taken during an attack are unlikely to be available soon enough to inform decisions in the ED, but can be useful for follow-up and future management. Angioedema: management in the ED Presentation of angioedema in the ED will likely fit one of four categories: swelling of the face (including hemifacial swelling) or lips (Fig. 5a); swelling of the tongue (Fig. 5b); laryngeal swelling; or abdominal swelling, pain, or discomfort, which can be severe. Peripheral cutaneous angioedema also is common; however, patients may not attribute the same level of risk to cutaneous angioedema as to angioedema at other sites, so presentation in the ED may be less frequent. These categories of angioedema are not mutually exclusive, and swelling of multiple sites may occur. Upper airway management Angioedema can progress rapidly. The first step is to consider whether the airway is "safe" or not. If it is not safe, and no time is available to make further assessments, the local anaphylaxis protocol should be followed and mechanical intervention may be needed, either by intubation or cricothyrotomy/tracheotomy [42]. 
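The differentiating features discussed earlier (urticaria and itching, speed of onset, ACEi exposure, family history) can be condensed into a rough decision sketch. The function below is purely illustrative and is not a validated clinical algorithm; the field names and the 1-hour onset threshold are our own simplifications of the published flow.

```python
# Illustrative sketch only; NOT a clinical decision tool. Field names and the
# 1-hour onset threshold are our simplifications of the features in the text.

def likely_mediator(urticaria: bool, onset_hours: float,
                    on_acei: bool, family_history: bool) -> str:
    if urticaria:
        # Urticaria/itching point toward histamine-mediated angioedema.
        return "histamine"
    if on_acei:
        # No urticaria in a patient taking an ACE inhibitor.
        return "bradykinin (suspect ACEi-induced)"
    if family_history or onset_hours > 1:
        # Slow, progressive onset without urticaria suggests hereditary/acquired forms.
        return "bradykinin (suspect hereditary/acquired)"
    return "undifferentiated"

print(likely_mediator(urticaria=False, onset_hours=6,
                      on_acei=True, family_history=False))
```

A real workup would of course also weigh response to treatment and the confirmatory blood tests described in the text.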
Visualization, ideally by nasopharyngeal laryngoscopy, should be considered in patients with stridor or hoarseness to evaluate the degree of laryngeal angioedema [21]. The need for intubation is significantly greater with presentations of laryngeal or pharyngeal involvement versus swelling of the lips and face. As a general rule, if swelling is in front of the teeth, drug treatment is likely to be sufficient; if swelling is behind the teeth, mechanical airway management should be considered [49]. The incidence of intubation increases with age [50]. If intubation is required, a nasopharyngeal or endotracheal airway should be considered as first choice [21]. Bradykinin-mediated angioedema commonly affects the lips and tongue, potentially obstructing the oropharyngeal pathway, whereas the nasal passage is unlikely to be obstructed. In this case, supraglottic devices (e.g., laryngeal masks) are not appropriate. The gold standard of airway management for bradykinin-mediated angioedema is an awake nasopharyngeal intubation. Bradykinin-mediated angioedema (hereditary, acquired, ACEi induced) can be triggered by mild trauma; thus, oral and laryngeal edema can be worsened by visualization and intubation attempts. Airway management of a patient with laryngeal angioedema is fraught with danger. Before attempts at intubation, a team experienced in nasopharyngeal intubation and the emergency delivery of a surgical airway should be summoned. Although stabilization of the airway is the highest priority, every effort should be made to determine the cause of the angioedema attack (known drug or food allergies, existing hereditary or acquired angioedema, ACEi use, family history, etc.) to ensure that the attack is managed appropriately. Medications for histamine-mediated angioedema For histamine-mediated angioedema and angioedema of undifferentiated etiology, standard treatment includes H1 and H2 antagonists and oral corticosteroids [21]. 
Airway swelling or hypotension are indications for epinephrine at a dose of 0.2-0.5 mg administered intramuscularly [51]. In the absence of anaphylaxis, epinephrine is not indicated for non-life-threatening symptoms that do not involve the airway. Medications for bradykinin-mediated angioedema Several medications are available for the treatment of acute HAE attacks (Table 2). Although these medications are not US Food and Drug Administration approved for ACEi-induced or acquired angioedema, recent and ongoing research suggests a broader role in treatment of bradykinin-mediated attacks in patients who do not have HAE [21,40]. C1-INH concentrates, which are derived from pooled donor plasma, are available. These C1-INH replacement products inhibit factor XII and kallikrein activity, which reduces bradykinin production [21]. One such concentrate, Berinert®, necessitates administration by a health care professional who can manage hypersensitivity reactions, with observation for ≥1 h following administration [21]. Icatibant is a synthetic selective bradykinin-2 receptor antagonist that inhibits the vascular effects of bradykinin and is approved for treatment of HAE attacks with subcutaneous administration by either patients or physicians [21]. No head-to-head comparative trials of these agents are available. About 10% of patients may require a second dose for incomplete response or symptom recurrence [21,40,41,52]. Several clinical trials using ecallantide or icatibant for ACEi-induced angioedema have recently been reported; findings reported by Lewis [40] corroborate the results of a case series [54]. Although not US Food and Drug Administration approved for acute treatment, fresh-frozen plasma (FFP) also may be used for ACEi-induced and other bradykinin-mediated angioedema. FFP provides volume replacement and is effective in most cases of bradykinin-mediated angioedema, with onset of symptom relief in approximately 30 to 90 min [55]. 
FFP also is less expensive than targeted treatments, but can cause hypersensitivity reactions and, rarely, worsening angioedema symptoms [21,22]. H1 and H2 antagonists, oral corticosteroids, and epinephrine are unlikely to improve bradykinin-mediated angioedema [21]. For angioedema associated with tPA, the mainstay of treatment thus far has been the combination of intravenous corticosteroids and antihistamines. Given the effectiveness of novel treatments such as icatibant and ecallantide for patients with HAE as well as ACEi-associated angioedema, these drugs also hold promise for angioedema associated with thrombolytic therapy. Discharge Patients without involvement of the tongue or larynx and with no other threat to the airway (Ishoo stage I; Ishoo grading is not validated but may be helpful as a guide) can be discharged after a number of hours of observation (≥2-6 h or longer as necessary) to ensure that symptoms have begun to resolve with no indication of development of airway obstruction. Discharge should not occur unless the airway has been assessed as Ishoo stage I or below. Patients with mild angioedema of the lips may be discharged with specific treatment if no further progression is observed [21,49]. On discharge, if possible, patients should be prescribed suitable medication to control recurrent symptoms, with referral for follow-up [21]. For patients with histamine-mediated or undifferentiated angioedema, discharge plans should include epinephrine and follow-up with an angioedema specialist to determine the appropriate treatment [21]. Patients with HAE should have access to targeted treatment that can be administered outside of the health care setting, for example, icatibant or a C1-INH concentrate. For patients with non-histamine-mediated angioedema, ACEis should be discontinued immediately with prescription of an alternative class of medication with a different mechanism of action. 
Patients should be informed that angioedema may occur or recur several weeks after cessation of ACEi therapy [56]. For patients who had a life-threatening attack of angioedema, both ACEis and ARBs should be avoided. For patients with less severe reactions to ACEis, ARBs may be used with close monitoring in patients with conditions that may particularly benefit from renin-angiotensin-aldosterone inhibition, such as chronic heart failure [36,39]. Follow-up Patients without a prior diagnosis should be referred to their primary care physician, an immunologist, or an allergy clinic on discharge, depending on the suspected cause of angioedema. If HAE is suspected, the patient should be referred to an immunologist with experience managing HAE to confirm the diagnosis and discuss possible use of prophylactic and/or on-demand therapies [21]. Results of blood tests taken in the ED during an attack, while unlikely to guide acute treatment, can be valuable for subsequent follow-up in the outpatient setting. Preparing your ED To ensure optimal treatment of patients presenting with angioedema, each ED would benefit from having an established protocol, algorithm, or management plan in place that is displayed or easily accessible. Recently published national and international guidelines may be a good basis for this plan [18,21,22,42,43,47,48,57-61]. Beyond training staff to recognize, differentiate, and manage angioedema in the ED, access to effective and well-tolerated drugs to treat bradykinin-mediated angioedema is vital. Conclusions Angioedema is a relatively common presentation in the ED and is potentially fatal. Angioedema management in the ED starts with assessing and securing the airway while initiating specific treatment. To ensure appropriate treatment and management, determination of whether the angioedema is mediated by histamine or bradykinin is essential. 
With the current lack of a reliable point-of-care test to distinguish the two pathophysiologies, ED physicians should familiarize themselves with available indicators to help guide treatment decisions. Histamine-mediated angioedema should be treated with H1 and H2 antagonists and oral corticosteroids, along with epinephrine as appropriate. Patients with HAE should receive a medication indicated for treating HAE, such as a C1-INH concentrate, ecallantide, or icatibant. Other causes of bradykinin-mediated angioedema may be treated with FFP. Hospitals should ensure that adequate procedures and treatments are in place for the management of angioedema. Authors' contributions All authors contributed equally to this work; all authors contributed to the content, reviewed and revised drafts of the work, and approved the final version. Competing interests JAB is a clinical investigator and consultant for CSL Behring, Dyax, Pharming/Santarus, Shire, and ViroPharma (part of the Shire Group of Companies), and a speaker for CSL Behring, Dyax, Shire, and ViroPharma. PC participated in the Hereditary Angioedema Global Forum, organized and sponsored by Shire. TH has acted in a consultant/advisor capacity and lectured/spoken at a company-sponsored meeting for Shire, has received a travel grant from Shire, and has been an investigator in company-sponsored scientific studies for BioAlliance Pharma, CSL Behring, Jerini, and ViroPharma. JH has chaired Shire-sponsored European and global advisory boards on the emergency recognition and management of angioedema and has spoken extensively at Shire-sponsored events across Europe. Consent for publication The patient in Fig. 5b consented to publication of the image. Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
The Hypothesis of a “Living Pulse” in Cells Motility is a great biosignature and its pattern is characteristic of specific microbes. However, motion also occurs within the cell through the myriad processes ongoing within the cell and the exchange of gases and nutrients with the outside environment. Here, we propose that the sum of these processes in a microbial cell is equivalent to a pulse in complex organisms and suggest a first approach to measure the “living pulse” in microorganisms. We emphasize that if a “living pulse” can be shown to exist, it would have far-reaching applications, such as for finding life in extreme environments on Earth and in extraterrestrial locations, as well as making sure that life is not present where it should not be, such as during medical procedures and in the food processing industry. Introduction Life and motion are intrinsically related. All life forms move. Even if they do not have specific appendages for movement (which most life forms, even microbes, do) and are considered "non-motile", they move due to the dynamic life processes that each living system must perform. Life requires compartmentalization to separate its insides from its external environment, and life tries to achieve homeostasis by exchanging nutrients and wastes between the inside and the outside of the cell. These requirements result in detectable changes that can be perceived as currents or as motions due to momentum conservation, changes in geometry, or changes in volume within a single cell. This is the case even if life forms only adjust to the microenvironment around them and exchange solutes and gases to maintain intracellular equilibrium and disequilibrium with their outside environment. The exact magnitude of these changes is unknown at this time, and most changes are expected to lie below the detection limit. 
Investigating those would provide us with much desired insights into the internal working of a microorganism (and also on the possibility of the presence of a "living pulse"). We consider here the motion within life forms as a physical property and universal biosignature [1], which has the advantage of not being dependent on the given biochemistry of an organism and as such it also applies more broadly to life as we may not know it. The way an organism is moving in respect to its outside environment is termed motility and there are broad types of movements exhibited by microorganisms. Most familiar as a means of fast microbial movement (swimming and swarming) are flagella. Other microorganisms have pili that allow twitching and others glide through focal adhesions. Some are even non-motile or just move passively (Table 1). There is a huge diversity of microbial motility. In a recent review, it was claimed that there is a total of 18 different types of motility and that additional ones are expected to be discovered in the near future [2]. In fact, for most microbes, we do not even know how they move. Nevertheless, paths taken by microbes can be tracked with machine learning methods and the type of organism can be identified, in some cases down to the species level [3]. We do know, however, that motility is an early trait of the evolution of life that is present in all kingdoms of life [4][5][6]. Motility has recently been recognized as an important biomarker in astrobiological investigations [7,8] and specialized instrumentation such as holographic microscopy has been devised to detect it in a variety of environments [9,10]. While motility is the movement of a cell with respect to its outer environment, the intracellular motions will only be visible by high-resolution microscopy or more macroscopically when growth and reproduction occur. 
The Hypothesis There is motion within a cell from the myriad of internal processes and also at the cell boundaries when an organism interacts with its natural environment. Venturelli et al. [11] suggested that living organisms exhibit motion at the nanoscale that is above and beyond the frequency of Brownian motion, such that it can be considered a universal signal of cellular life. Here, we designate the sum of these internal motions as a "living pulse" in analogy to the rhythmic pattern exhibited by complex organisms during breathing. Whether the "living pulse" is only a stochastic pattern resulting from the motions and adjustments to the above-mentioned changes, or whether there is an intrinsic periodic pattern (perhaps an emergent property of life compared to just chemical systems, in analogy to the pulse in more complex animals), is uncertain, but the investigations suggested below are hoped to reveal just that. We hypothesize that each microorganism has such a "living pulse", a rhythmic pattern that in principle can be detected by state-of-the-art technology. Experimental evidence that such a "living pulse" exists comes from nanomechanical oscillators [12], which detect forces on the order of a piconewton and which were used to characterize living specimens and their metabolic cycles [13-16]. For example, cantilevers were used to investigate the activity of a cell's molecular motors [17] and the particular vibrations of living Saccharomyces cerevisiae [18,19]. Cellular nanomotion has also been detected and monitored by micro- and nano-fabricated sensors [20-23] independent of cellular motility [11]. 
Extremely sensitive changes in mass and metabolically induced oscillations of microorganisms have been measured using quartz crystal microbalances [24-26] and atomic force microscopy (AFM) [27,28], including metabolically induced oscillations of microorganisms [29,30], which supports the notion that microbial metabolic activity could be utilized for life detection at the cellular scale [31,32]. The force measured by nanomechanical oscillators, on the order of a piconewton [12], accords well with our estimate of the same order for the force required for one ion to cross a cellular membrane (about 2 piconewtons). This value is obtained by assuming a resting potential of 70 mV and a membrane thickness of 5 nm; the equilibrium potential is calculated using the Nernst equation, and multiplying this potential by a unit charge (1.6 × 10⁻¹⁹ C) and dividing by the membrane thickness yields the force. If we assume an ionic flux of 100 ions/s through the membrane [33], we should be able to pick up a signal corresponding to a force of at least 100 piconewtons per second. The Question of Detection While we interpret the hypothetical "living pulse" of a microorganism to be the sum of the internal processes occurring and the interactions of the membrane with the outer environment, especially movements across the ion channels, the magnitude is still expected to be minuscule. However, new microscopes, such as stimulated emission depletion (STED) microscopes, allow the observation of the movement of organelles or vesicles within the cell and also pick up autofluorescence in the cell, and thus will at least allow us to arrive at better estimates of the frequency and magnitude at which a signal pattern could be expected (Figure 1). An overview of a sample cell can be obtained with a conventional microscope. AI software is then employed to identify rare and anomalous observations, which are then further scrutinized with STED microscopy at a super-resolution of 50 nm or below [34]. 
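The ~2 pN figure above can be reproduced with a few lines of arithmetic. This is our own reconstruction of the estimate (treating the force on a unit charge crossing the membrane as charge × potential / thickness); it is meant only to reproduce the order of magnitude quoted in the text.

```python
# Reconstruction of the order-of-magnitude estimate in the text:
# force on one elementary charge crossing a 5 nm membrane held at 70 mV.

ELEMENTARY_CHARGE = 1.6e-19   # C
resting_potential = 70e-3     # V
membrane_thickness = 5e-9     # m

force_per_ion = ELEMENTARY_CHARGE * resting_potential / membrane_thickness  # N
print(f"force per ion: {force_per_ion * 1e12:.2f} pN")  # force per ion: 2.24 pN

# An assumed flux of 100 ions/s then gives the cumulative signal scale
# quoted in the text (on the order of 100 pN per second).
ion_flux = 100  # ions per second
print(f"cumulative: {force_per_ion * ion_flux * 1e12:.0f} pN/s")
```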
The maximum frame rate for imaging living organisms is about 30 frames per second, which might be within the range required to detect the "living pulse". This would allow us to visualize changes in cell structure and fluorescent markers associated with cellular processes such as signaling or contractions. If a fluorescent marker is used to label a specific protein or organelle within the cell that is hypothesized to change in response to the "living pulse", STED microscopy could be used to visualize these changes with high spatial resolution to detect the presence of a pattern. In addition, it will be useful to monitor any morphological changes at these magnifications. For example, if cells without a rather rigid cell wall are used, vibrational patterns might be identified at the cell membrane. A complementary technique to use would be scanning ion-conductance microscopy (SICM), a non-invasive scanning technique employed to study dynamic cellular processes at the nanoscale, particularly those related to ion conductance [35]. One specific approach we propose is to use a dead cell as a control, observe it for a specific time period, and record any instrument "flickers". Then, use a living cell, observe and record its live field of view, from which the white noise of the dead cell is subtracted. Software could be used to remove particular wavelength periodicities to reveal any intrinsic pattern to the cell: the "living pulse". The "living pulse" hypothesized here is not to be mistaken for the circadian rhythm, which has been found not only in eukaryotes [36] but also in cyanobacteria [37][38][39]. It is thought to be exhibited in cyanobacteria because adaptation to the light-dark cycle confers a selective advantage [40]. Thus, a circadian rhythm is an organism's response to environmental cycles, in contrast to the "living pulse", which is thought to be a rhythmic pattern inherent to a microbial organism.
A circadian rhythm may also exist in the purple non-sulfur bacterium Rhodopseudomonas palustris and in Bacillus subtilis based on gene expression patterns [41,42] and has also been proposed for the human microbiome [43], but it is unclear which, or whether all, microbes have a circadian rhythm. While the circadian rhythm, being an adaptation to environmental cues, is unrelated to the "living pulse", the "living pulse" may be more pronounced during times of higher activity, such as during the light cycle in cyanobacteria. Even if the detection range is not achieved by the above methodology alone, there are additional options to enhance the potential signal. First, microfluidic platforms could be used to separate single cells and coat them with a hydrogel matrix or to fix them with optical laser tweezers. Cells used for initial trials would not have a cell wall, which might dampen the signal. Moreover, if the ion channels are determined to be a significant contributor to the overall signal, then genetic modifications of the tested species through evolutionary generation experiments might be warranted to maximize the number of ion channels in a specific tested species.
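The dead-cell control proposed earlier amounts to subtracting a baseline noise spectrum and searching the residual for a periodic component. A minimal sketch on synthetic data (no real recordings exist yet, so the 2 Hz "pulse", the noise level, and the 30 fps rate taken from the STED limit above are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 30.0                      # frames per second (the imaging limit mentioned above)
t = np.arange(0, 60, 1 / fs)   # one minute of recording

# Hypothetical signals: the dead cell contributes only instrument noise, while
# the living cell adds a weak 2 Hz oscillation on top of the same kind of noise.
dead = rng.normal(0, 1.0, t.size)
live = rng.normal(0, 1.0, t.size) + 0.5 * np.sin(2 * np.pi * 2.0 * t)

freqs = np.fft.rfftfreq(t.size, d=1 / fs)
p_dead = np.abs(np.fft.rfft(dead)) ** 2   # power spectrum of the control
p_live = np.abs(np.fft.rfft(live)) ** 2   # power spectrum of the living cell

# Subtract the dead-cell "white noise" spectrum and look for a residual peak.
residual = p_live - p_dead
peak_hz = freqs[np.argmax(residual)]
print(f"candidate pulse frequency: {peak_hz:.2f} Hz")
```

The same subtract-and-search logic would apply to real SICM or cantilever traces; the open question is only whether a living cell's residual spectrum actually contains such a peak.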
Another, probably easier, approach is to enhance or amplify the signal by using stimulants such as L-serine [44,45]. Alternatively, other stimulants such as heat, oxygen, or light could be used to increase the signal strength. One challenge will be to distinguish the hypothetical "living pulse" from environmental noise. Many processes could contribute to environmental noise, including, for example, chemical concentration gradients, physical disturbances, and even the interaction of one organism with another. However, all environmental noise has one thing in common: it originates from outside the cell. Thus, the direction of the rhythmic pattern can be used as a distinguishing marker. If the pattern is detected moving from the inside of the cell to the outside, we interpret it to be the "living pulse", because environmental noise would travel from the outside of the cell to the inside. If there is a periodic pattern, it can be detected in a controlled environment where abiotic periodicities are either absent or known. While we hypothesize that all living microbes will exhibit a "living pulse", we expect the frequency and the magnitude to differ depending on the species, just as is the case for animals.
The Significance of Detecting a "Living Pulse"
If the hypothesized "living pulse" can be detected, it would have far-reaching applications. While most of Earth's surface is populated by microbes, there are extreme environments where this is questionable. These include areas in the hyperarid Atacama Desert [46,47], the Don Juan Pond in Antarctica [48,49], the Dallol Geothermal Area in Ethiopia [50,51], and newly created volcanic landscapes [52]. The "living pulse" would also be an ideal tool to determine whether life exists on an extraterrestrial body.
The Viking life detection experiments conducted on Mars, the only life detection experiments ever conducted on an extraterrestrial body, underline this problem, as it has still not been resolved whether life was actually detected [53][54][55]. Moreover, given ongoing missions to Mars, and especially the sample return missions from Mars to Earth expected from both NASA [56] and China [57] in the early 2030s, a universal biosignature independent of a life form's specific biochemistry is urgently needed to satisfy planetary protection concerns. This is particularly important for backward contamination, in order to safeguard Earth's biosphere. Furthermore, there are locations where we do not want life to be present, and the "living pulse" could be used to verify its absence. Examples are surgery tables during medical procedures, including the instruments utilized, and food processing. The detection of Deinococcus radiodurans, which was discovered because it survived the application of high doses of gamma radiation intended to sterilize canned food [58], shows that sterile conditions cannot be guaranteed even if sterilizing stressors are applied.
Conclusions
While the existence of a "living pulse" in microorganisms remains unexplored, if such a signal can be detected, it will have profound consequences as a universal biosignature independent of a microorganism's biochemistry. It would be an invaluable tool for finding life in extreme environments on Earth and in extraterrestrial environments beyond Earth, including when enforcing planetary protection protocols. The detection of this physical property of life could also have important implications on Earth, such as in the detection of viable microorganisms in the medical field and in food processing.
Author Contributions: The authors contributed in equal parts to this opinion/hypothesis paper. All authors have read and agreed to the published version of the manuscript.
Geographic disparities and temporal changes of diabetes-related mortality risks in Florida: a retrospective study
Background: Over the last few decades, diabetes-related mortality risks (DRMR) have increased in Florida. Although there is evidence of geographic disparities in pre-diabetes and diabetes prevalence, little is known about disparities of DRMR in Florida. Understanding these disparities is important for guiding control programs and allocating health resources to communities most in need. Therefore, the objective of this study was to investigate geographic disparities and temporal changes of DRMR in Florida. Methods: Retrospective mortality data for deaths that occurred from 2010 to 2019 were obtained from the Florida Department of Health. International Classification of Diseases, Tenth Revision (ICD-10) codes E10–E14 were used to identify diabetes-related deaths. County-level mortality risks were computed and presented as the number of deaths per 100,000 persons. Spatial Empirical Bayesian (SEB) smoothing was performed to adjust for spatial autocorrelation and the small number problem. High-risk spatial clusters of DRMR were identified using Tango's flexible spatial scan statistic. Geographic distribution and high-risk mortality clusters were displayed using ArcGIS, whereas seasonal patterns were visually represented in Excel. Results: A total of 54,684 deaths were reported during the study period. There was an increasing temporal trend as well as seasonal patterns in diabetes mortality risks, with high risks occurring during the winter. The highest mortality risk (8.1 per 100,000 persons) was recorded during the winter of 2018, while the lowest (6.1 per 100,000 persons) was in the fall of 2010. County-level SEB smoothed mortality risks varied by geographic location, ranging from 12.6 to 81.1 deaths per 100,000 persons. Counties in the northern and central parts of the state tended to have high mortality risks, whereas southern counties consistently showed low mortality risks.
Similar to the geographic distribution of DRMR, significant high-risk spatial clusters were also identified in the central and northern parts of Florida.
Conclusion
Geographic disparities of DRMR exist in Florida, with high-risk spatial clusters being observed in rural central and northern areas of the state. There is also evidence of both increasing temporal trends and winter peaks of DRMR. These findings are helpful for guiding allocation of resources to control the disease, reduce disparities, and improve population health.
INTRODUCTION
Diabetes is a chronic metabolic disease affecting millions of people worldwide, and its prevalence has been increasing among the adult population of the United States (US) over the past few decades (Centers for Disease Control and Prevention, 2003a). According to the Centers for Disease Control and Prevention, a total of 28.7 million Americans have been diagnosed with diabetes and 8.5 million people are living with undiagnosed diabetes (Centers for Disease Control and Prevention, 2022b).
Diabetes is associated with several life-threatening conditions, including chronic kidney disease, cardiovascular disease, stroke, retinopathy, and visual impairment (Centers for Disease Control and Prevention, 2022a, 2022b). As a result, people with diabetes are at a higher risk of death compared to those without the condition. As of 2021, the diabetes-related mortality risk (DRMR) in the US was 31.1 per 100,000 persons, and the disease is reported to be the 8th leading cause of death in the country (Centers for Disease Control and Prevention, 2023b; Xu et al., 2022). The condition imposes a significant economic burden, as the average medical expenses of patients with diabetes are 2.3 times higher than for those who do not have the disease (Florida Department of Health, 2017). In 2017, Florida's total diabetes-related costs were approximately $25 billion, with $19.3 billion being direct costs and another $5.5 billion being indirect costs (Florida Department of Health, 2017). There is evidence of geographic disparities of diabetes in the US. The diabetes belt, which includes fifteen states of the Southeast US (including Florida), has a higher diabetes prevalence (≥11.0%) compared to the nation's average (8.5%) (Barker et al., 2011). Every year an estimated 148,613 people are diagnosed with diabetes in Florida (Florida Department of Health, 2017), and as of 2021, the DRMR was 24.8 per 100,000 persons (Centers for Disease Control and Prevention, 2022c). There has been a significant upward trend in DRMR in the US over the last few decades (US Department of Health and Human Services, 2020), and these temporal changes show evidence of regional disparities (US Department of Health and Human Services, 2020), as certain states and communities have experienced higher mortality risks than others. Moreover, the risk of diabetes increases with age, and Florida has the 2nd largest proportion of seniors (i.e., individuals >65 years old). Addressing geographic disparities of DRMR is important in providing useful information to guide efforts to reduce disparities and improve population health. Therefore, the objective of this study was to identify geographic disparities and temporal changes of DRMR in Florida.
Ethics approval
This study was approved by the University of Tennessee, Knoxville Institutional Review Board (Number: UTK IRB-23-07809-XM).
Study area
This retrospective study was conducted in the state of Florida, which comprises 67 counties (Fig. 1), some of which lie within the diabetes belt (Barker et al., 2011). Geographically, the state is located at approximately 27.66° N, 81.52° W and spans 65,758 square miles, ranking 22nd by area among the 50 states of the US. As of 2020, 21.5 million people live in Florida, of whom 50.8% are female and 49.2% are male (United States Census Bureau, 2023a). The age distribution among the adult population is as follows: 24% are 18-34 years old, 26% are 35-49 years old, 25% are 50-64 years old, and 22% are ≥65 years old. The overall racial composition of Florida residents is 76.9% White, 17% Black, and 6.1% all other races (United States Census Bureau, 2023a). The population is 71.2% non-Hispanic and 28.73% Hispanic. Miami-Dade County, which is located in the southeastern part of the state, is the most populous with 2.6 million people, whereas Liberty County is the least populated with 7,987 residents (United States Census Bureau, 2023a). Counties with population densities of ≤100 persons per square mile are classified as rural, while those with higher population densities are classified as urban (Florida Department of Health, 2023a). Based on this classification, there are 30 rural and 37 urban counties in Florida (Fig. 1).
Data source and management
The 2010-2019 individual-level death data were obtained from the Florida Department of Health (2023b). The cause of death was recorded using the 10th revision of the International Classification of Diseases (World Health Organization, 2023), and the codes E10-E14 were used to identify diabetes-related deaths (World Health Organization, 2023). No differentiation was made between Type 1 and Type 2 diabetes. The numbers of diabetes-related deaths were aggregated at the county level using R statistical software version 4.2.2 (R Core Team, 2023). County-level DRMR were then calculated and expressed as the number of deaths per 100,000 persons. Population estimates for 2010 to 2019 were obtained from the American Community Survey and used as the denominators to calculate the county-level DRMR for the 2010 to 2019 period (United States Census Bureau, 2022). Cartographic boundary files were downloaded from the United States Census Bureau TIGER Geodatabase (United States Census Bureau, 2023b) and used for the spatial displays at the county level. GeoDa software version 1.14 (Anselin, 2023) was used to compute the county-level Spatial Empirical Bayesian (SEB) smoothed DRMR to adjust for spatial autocorrelation and the small number of cases in some counties (Bernardinelli & Montomoli, 1992; Khan et al., 2023; Haddow & Odoi, 2009).
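The county-level risk computation described above (deaths aggregated per county, divided by population, expressed per 100,000 persons) reduces to a simple calculation; a minimal sketch with invented county figures, not the study's data:

```python
from collections import Counter

# Hypothetical inputs: one record per diabetes-related death, plus county
# populations (all numbers here are made up for illustration).
deaths = ["Taylor", "Taylor", "Leon"]
population = {"Taylor": 21_800, "Leon": 293_000, "Liberty": 7_987}

# Aggregate deaths per county, then express as deaths per 100,000 persons.
counts = Counter(deaths)
risk = {county: counts.get(county, 0) / pop * 100_000
        for county, pop in population.items()}

for county, r in sorted(risk.items()):
    print(f"{county}: {r:.1f} per 100,000")
```

Counties with zero recorded deaths (Liberty here) simply get a rate of 0; it is exactly this instability for small populations that motivates the SEB smoothing step described above.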
Detection of spatial clusters
Evidence of spatial autocorrelation of DRMR was assessed using the global Moran's I statistic, specifying 1st-order queen weights, implemented in GeoDa (Anselin, 2023). Tango's Flexible Spatial Scan Statistic (FSSS) was used in FleXScan software version 3.1.2 (Takahashi, Yokoyama & Tango, 2010) to identify statistically significant irregularly shaped and circular high-risk spatial clusters of DRMR (Tango & Takahashi, 2005). A maximum cluster size of 15 counties was specified as the spatial scanning window to avoid the identification of excessively large clusters. A Poisson probability model with a restricted log likelihood ratio (LLR) was utilized (Tango & Takahashi, 2012). Nine hundred and ninety-nine Monte Carlo replications and a critical p-value of 0.05 were used for significance testing to identify statistically significant high-risk clusters. Potential clusters were then ranked based on their restricted LLR. The cluster with the highest restricted LLR value was designated the primary cluster, and the rest were considered secondary clusters. Only high-risk clusters with a relative risk of ≥1.20 were reported in this study, in order to avoid reporting low-risk clusters. To assess whether the spatial distribution of the clusters changed between the beginning and the end of the study period, two cluster analyses were performed: one for the first two years (2010-2011) and another for the last two years (2018-2019) of the study period. The spatial distribution of the clusters was then compared visually.
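Global Moran's I with first-order queen weights, as used above, can be illustrated on a toy lattice. This is a from-scratch sketch of the statistic itself, not the GeoDa implementation, and the 3×3 grid stands in for the 67-county map:

```python
import numpy as np

def morans_i(values, W):
    """Global Moran's I for a 1-D array of values under binary weights W."""
    x = np.asarray(values, dtype=float)
    z = x - x.mean()
    num = x.size * (W * np.outer(z, z)).sum()   # weighted cross-products
    den = W.sum() * (z ** 2).sum()              # normalization
    return num / den

def queen_weights(rows, cols):
    """First-order queen contiguity: cells sharing an edge or a corner."""
    n = rows * cols
    W = np.zeros((n, n))
    for r in range(rows):
        for c in range(cols):
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if (dr, dc) != (0, 0) and 0 <= rr < rows and 0 <= cc < cols:
                        W[r * cols + c, rr * cols + cc] = 1
    return W

W = queen_weights(3, 3)
clustered = [1, 1, 0, 1, 1, 0, 0, 0, 0]    # high values bunched in one corner
print(round(morans_i(clustered, W), 3))    # 0.125: positive -> spatial clustering
```

A positive value, as in the paper's reported I = 0.430, indicates that neighboring areas tend to carry similar risks, which is the precondition for the cluster detection step.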
Identification of temporal changes
The descriptive statistics of DRMR were calculated using R statistical software version 4.2.2 (R Core Team, 2023) implemented in RStudio (RStudio Team, 2023). The numbers of diabetes-associated deaths were aggregated by season (winter: December, January, February; spring: March, April, May; summer: June, July, August; and fall: September, October, November) to identify the temporal changes of DRMR for the period of 2010 to 2019. Seasonal DRMR were computed and expressed as the number of deaths per 100,000 persons. The temporal changes in DRMR over the 10-year study period were displayed graphically in Microsoft Excel (Microsoft, Redmond, WA, USA).
Cartographic displays
The cartographic displays were produced using ArcGIS version 10.8.1 (Environmental Systems Research Institute, 2023). County-level choropleth maps were used to visualize the distribution of both smoothed and unsmoothed DRMR using Jenks' optimization classification scheme (Jenks, 1967). Choropleth maps, using Jenks' optimization classification scheme, were generated for each of the 10 years of the study period. Identified high-risk spatial clusters of DRMR were also displayed in ArcGIS.
Geographic distribution of DRMR
A total of 54,684 diabetes-related deaths were reported in Florida during the study period. The spatial patterns of the SEB smoothed maps were more apparent than those of the unsmoothed maps (Figs. 2 and 3). The county-level SEB DRMR varied geographically, ranging from 12.6 to 81.1 deaths per 100,000 persons across the state. The lowest mortality risk was observed in Leon County (12.6 deaths per 100,000 persons in 2015), while the highest was in Taylor County (81.1 per 100,000 persons in 2018). The northern and central counties of Florida tended to have high mortality risks during the 2010-2011 and 2016-2019 time periods (Fig. 3). High DRMR were observed predominantly in rural counties (Fig. 4). Specifically, the highest mortality risk (58.7 deaths per 100,000 persons) was recorded in northern rural Taylor County, while urban Leon County had the lowest (18.6 deaths per 100,000 persons). The rural counties neighboring Taylor County (i.e., Lafayette, Madison, Suwannee, Hamilton, Columbia, and Highlands counties) had similarly high mortality risks (Figs. 1 and 4). Although a few southern urban counties (including Desoto, Hardee, Highlands, and Glades counties) had high mortality risks, most of the urban counties in the southern parts of the state had low mortality risks (Figs. 1 and 4).
Purely spatial clusters of DRMR
There was evidence of global spatial autocorrelation of DRMR (Moran's I = 0.430; p < 0.001). Significant high-risk spatial clusters were detected in the central and northern parts of the state (Fig. 5). Six and seven clusters were identified during the 2010-2011 and 2018-2019 time periods, respectively.
Temporal changes of DRMR
There was an increasing temporal trend as well as seasonal patterns in DRMR from 2010 to 2019 (Fig. 6). The risks were consistently high during the winter season and low during the fall season (Fig. 6). The lowest DRMR was recorded during the fall season of 2010 (6.08 deaths per 100,000 persons), while the highest was recorded during the winter of 2018, with a risk of 8.13 deaths per 100,000 persons.
DISCUSSION
This study investigated geographic disparities and temporal patterns of county-level DRMR in Florida. The findings of this study will be useful for guiding evidence-based health planning and resource allocation to reduce disparities and improve diabetes health outcomes by targeting high-risk communities with control programs.
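The seasonal grouping used in the temporal analysis (winter: December-February; spring: March-May; summer: June-August; fall: September-November) is a simple month lookup; a sketch with hypothetical death records:

```python
from collections import Counter

# Month-to-season mapping as defined in the Methods section.
SEASONS = {12: "winter", 1: "winter", 2: "winter",
           3: "spring", 4: "spring", 5: "spring",
           6: "summer", 7: "summer", 8: "summer",
           9: "fall", 10: "fall", 11: "fall"}

def season_of(month: int) -> str:
    return SEASONS[month]

# Hypothetical death records as (year, month) tuples, not the study's data.
records = [(2010, 1), (2010, 2), (2010, 7), (2010, 12)]
by_season = Counter(season_of(m) for _, m in records)
print(by_season)   # Counter({'winter': 3, 'summer': 1})
```

Dividing each season's count by the population and multiplying by 100,000 then yields the seasonal DRMR plotted in Fig. 6.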
Geographic distribution and clustering of DRMR
There is evidence of geographic disparities of DRMR in Florida, with most of the high-risk counties being located in the rural northern and central parts of the state. Previous studies have consistently reported that diabetes mortality remains a significant cause of concern in rural America, especially in the southern rural parts of the US (Callaghan et al., 2020; Brown-Guion et al., 2013; O'Brien & Denham, 2008). These rural areas experienced much higher DRMR compared to their urban counterparts (Callaghan et al., 2020), indicating a significant disparity between rural and urban areas. The lack of healthcare facilities in rural parts of Florida might be a reason for these disparities (Rural Health for Florida, 2023). Additionally, insufficient public transportation and long distances to healthcare facilities may also hinder the accessibility of healthcare services (O'Brien & Denham, 2008; Agency for Healthcare Research and Quality, 2023). Rural communities face challenges in attending regular health checkups, leading to lower detection rates of potential health problems, including diabetes, and subsequently increasing the risk of diabetes-related death. Significant high-risk spatial clusters of DRMR exist in the northern and central parts of the state. This result is consistent with previous findings, which reported that some counties in the northern regions of Florida are located within the diabetes belt and have excess diabetes prevalence and diabetes-related complications (Barker et al., 2011; Ford et al., 2012). Geographic disparities in the Diabetes Self-Management Education program, which aims to educate diabetic patients on disease management, might be a reason behind these observed results. Previous studies have reported that implementation of the Diabetes Self-Management Education program remains low in northern counties of Florida despite these regions having high prevalence (Paul et al., 2018; Khan et al., 2021). As a result, residents of north Florida counties may be less aware of diabetes and diabetes-associated complications, hence increasing the risk of death. Differences in the socioeconomic conditions of communities might be another reason for the observed cluster patterns. Most of the counties with below-median household income are located in the northern and central regions of Florida (Florida Department of Health, 2023c). This disparity in income levels may result in limited access to essential health insurance, preventive care, proper nutrition, and physical activity (Loftus et al., 2017), directly or indirectly influencing diabetes and associated complications. Studies report that adults with diabetes living in low-income communities are at a two-fold higher risk of death compared with their more affluent counterparts (Saydah & Lochner, 2010; Mackenbach, 2012; Dalsgaard et al., 2015). It is also possible that geographic differences in comorbidities play a role in the observed DRMR clustering. Previous research indicated that the northern parts of Florida exhibit significantly higher rates of comorbidities, including hypertension, heart disease, and kidney disease, compared to other regions within the state (Smith et al., 2018; Odoi et al., 2019). Comorbidities in individuals with diabetes can further increase the risk of mortality through their adverse effects on blood pressure, myocardial infarction, cancer risks, kidney function, and blood sugar control. Previous studies have reported that cardiovascular disease is a strong predictor of diabetes-related mortality, and that it was three to four times higher in patients with diabetes than in those without (Khaw et al., 2004; Rawshani et al., 2017; Lu et al., 2021).
Temporal pattern of DRMR
The observed high mortality risks during the winter months, followed by a decline from summer to fall and a subsequent increase in the winter season throughout the study period, were comparable with reports from other previous studies (Zhang et al., 2021; Tseng et al., 2005). A study conducted by Zhang et al. (2021) reported that the mean level of fasting blood glucose is significantly higher during the winter than in summer, which may contribute to an increased risk of diabetes-related complications and death. Additionally, hemoglobin A1c (glycosylated hemoglobin, or A1c), which is an indicator of the average blood glucose level over the past 90 days, is also associated with microvascular and macrovascular risk in diabetes patients (Teitelbaum et al., 1990), and fluctuation of the A1c level can increase the risk of death. Tseng et al. (2005) reported that the A1c level was higher during the winter season and lower in summer, with a difference of 0.22 A1c units. A number of factors, such as changes in physiology, diet, body mass index, and physical activity in the winter, might be possible explanations for the observed seasonal pattern of DRMR. Physical inactivity and sedentary behavior have been reported to increase in the winter months (Cepeda et al., 2018; Pivarnik, Reeves & Rafferty, 2003), which also increases blood glucose as well as the risk of diabetes-related death.
Strengths and Limitations
This study used a rigorous spatial epidemiological approach to investigate the geographic distribution and spatial clusters of DRMR in Florida. Tango's FSSS is a robust method for identifying spatial clusters, as it does not involve multiple comparisons and it identifies both circular and irregularly shaped clusters. Moreover, this is the first study to investigate geographic disparities of DRMR in Florida. However, this study has some limitations. Spatial patterns identified at the county level may differ from those at lower spatial scales of analysis, such as census tracts. Unfortunately, due to the small number of diabetes deaths at lower spatial scales, the analyses could not be conducted at those scales. Tango's FSSS has low power for detecting circular clusters. Additionally, this study used surveillance data, which might have some limitations in terms of case ascertainment and reporting. Finally, this study only investigated geographic disparities of DRMR at the county level; therefore, it was not possible to assess or identify lower-level disparities.
CONCLUSIONS
Geographic disparities in diabetes-related mortality risks exist in Florida, with notable high-risk clusters in the rural central and northern regions of the state. There was also evidence of an increasing temporal trend and seasonal patterns of DRMR, with the highest mortality risks being observed during the winter season. The study findings are useful for guiding health resource allocation targeting high-risk communities so as to reduce health disparities in Florida. Future research will investigate predictors of the identified disparities.
Structured Reporting of Lung Cancer Staging: A Consensus Proposal
Background: Structured reporting (SR) in radiology is becoming necessary and has recently been recognized by major scientific societies. This study aimed to build CT-based structured reports for lung cancer during the staging phase, in order to improve communication between radiologists, members of the multidisciplinary team and patients. Materials and Methods: A panel of expert radiologists, members of the Italian Society of Medical and Interventional Radiology, was established. A modified Delphi exercise was used to build the structured report and to assess the level of agreement for all the report sections. The Cronbach's alpha (Cα) correlation coefficient was used to assess internal consistency for each section and to perform a quality analysis according to the average inter-item correlation. Results: The final SR version was built by including 16 items in the "Patient Clinical Data" section, 4 items in the "Clinical Evaluation" section, 8 items in the "Exam Technique" section, 22 items in the "Report" section, and 5 items in the "Conclusion" section. Overall, 55 items were included in the final version of the SR. The overall mean of the experts' scores and the sum of scores for the structured report were 4.5 (range 1–5) and 631 (mean value 67.54, SD 7.53), respectively, in the first round. The items of the structured report with the highest accordance in the first round were primary lesion features, lymph nodes, metastasis and conclusions. The overall mean of the experts' scores and the sum of scores for staging in the structured report were 4.7 (range 4–5) and 807 (mean value 70.11, SD 4.81), respectively, in the second round. The Cronbach's alpha (Cα) correlation coefficient was 0.89 in the first round and 0.92 in the second round for staging in the structured report.
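Cronbach's alpha, reported above for internal consistency, is computed from the item-score variances and the variance of the summed scores. A sketch with invented expert ratings (not the panel's actual data):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a raters x items matrix of ratings."""
    X = np.asarray(scores, dtype=float)
    k = X.shape[1]                            # number of items
    item_var = X.var(axis=0, ddof=1).sum()    # sum of per-item variances
    total_var = X.sum(axis=1).var(ddof=1)     # variance of each rater's total
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical Delphi ratings (rows: experts, columns: report items, 1-5 scale).
ratings = [[5, 4, 5, 4],
           [4, 4, 5, 5],
           [5, 5, 5, 4],
           [3, 3, 4, 3]]
print(round(cronbach_alpha(ratings), 2))  # 0.88
```

Values around 0.9, like the 0.89 and 0.92 reported above, indicate that the experts rated the report items consistently with one another.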
Conclusions: The wide implementation of SR is critical for providing referring physicians and patients with the best quality of service, and for providing researchers with the best quality of data in the context of the big data exploitation of the available clinical data. Implementation is complex, requiring mature technology to successfully address pending user-friendliness, organizational and interoperability challenges.

Introduction

The American Recovery and Reinvestment Act and the Health Information Technology for Economic and Clinical Health Act have indicated that structuring data in health records will lead to an important improvement in patient outcomes [1,2]. Since the radiology report is part of the health record, the current format of free-text reporting (FTR) should be organized and shifted toward structured reporting (SR). The issue of whether all radiological examinations should contain a structured report, and if so, what the actual report structure should be, remains open [1][2][3]. According to the European Society of Radiology's (ESR) paper on SR in radiology [1], the three main reasons for moving from FTR to SR are quality, data quantification and accessibility. A critical quality improvement dimension resulting from the use of SR is standardization. The use of templates in SR provides a checklist as to whether all relevant items for a particular examination have been addressed. Thanks to this "structure", the radiology report will also allow the association of radiological data and other key clinical features, leading to a precise diagnosis and personalized medicine. With regard to accessibility, it is known that radiology reports are a rich source of data for research. This allows automated data mining, which may help to validate the relevance of imaging biomarkers by highlighting the clinical contexts in which they are most appropriate, and to devise potential new application domains.
For this reason, radiology reports should be structured via their content, based on standard terminology, and should be accessible via standard access mechanisms and protocols. Weiss et al. have described three levels of SR [4]:

1. The first level is a structured format with paragraphs and subheadings. Currently, almost all radiology reports display this structure, with sections for clinical information, examination protocol and radiological findings, and a conclusion to highlight the most important findings.
2. The second level refers to consistent organization. For example, rectal cancer magnetic resonance imaging (MRI) describes all relevant features, such as tumor (T) stage, node (N) stage, anal sphincter complex involvement, tumor deposits in the mesorectal space, extramural vascular invasion, etc.
3. The third level directly addresses the consistent use of dedicated terminology, namely, standard language.

Several proposals have been made by major International Societies of Radiology to support the use of SR [5][6][7][8][9][10]. The Italian Society of Medical and Interventional Radiology (SIRM) has created an Italian warehouse of SR templates, which can be freely accessed by all SIRM members, for the purpose of being routinely used in a clinical setting [11]. Despite these promising developments, SR has not yet been established in clinical routine. A survey of Italian radiologists found that the majority of those surveyed had heard of SR, but only a minority of them regularly used it in their clinical work [10]. Reasons for this include the current lack of usable templates and the minimal availability of software solutions for SR [10]. Lung cancer is the leading cause of cancer morbidity and mortality in men, whereas in women, it ranks third for incidence after breast and colorectal cancer, and second for mortality after breast cancer [12].
The incidence and mortality rates are roughly twice as high in men as in women, although the male-to-female ratio varies widely across regions. Lung cancer incidence and mortality rates are 3 to 4 times higher in transitioned countries than in transitioning countries; this pattern may well change as the tobacco epidemic evolves, given that 80% of smokers aged 15 years or older resided in low-income and middle-income countries in 2016 [12,13]. In the absence of symptoms to identify early lung cancer, screening high-risk individuals has the potential of shifting the diagnosis to earlier stages [14][15][16][17][18][19][20]. After more than 30 years of research, a large randomized controlled trial established that low-dose computed tomography (CT) screening reduced mortality in patients at high risk for lung cancer. Subsequently, the majority of professional societies have emphasized the importance of lung cancer screening. Although lung cancer screening is not unanimously recommended, the value of identifying early-stage lung cancer cannot be overstated. The majority of new cases of lung cancer present in advanced stages (III-IV), when a cure is unlikely or unattainable [21]. In this context, a disease-specific SR could be an effective tool for conveying all diagnostic imaging information needed for a correct lung cancer diagnosis and staging, while including clinical information required for personalized patient management. The aim of the present study was to propose an SR template that can guide radiologists in the systematic reporting of CT examinations for lung cancer staging, in order to improve communication between radiologists and clinicians, particularly in non-referral centers.

Expert Panel

As a result of critical discussion between expert radiologists, a multi-round consensus-building Delphi exercise was carried out to develop a comprehensive and focused SR template for CT staging of patients with lung cancer.
A SIRM radiologist expert in thoracic imaging created the first draft of the SR template for lung cancer staging CT examinations. A working team of 13 experts from the Italian College of Thoracic Radiologists and of Diagnostic Imaging in Oncology Radiologists from SIRM was established to iteratively revise the initial draft, with the aim of reaching a final consensus on SR.

Selection of the Delphi Domains and Items

All the experts reviewed the literature data on the main scientific databases (including Pubmed, Scopus and Google Scholar) from December 2000 to December 2020, in order to assess papers on lung cancer CT and radiological SR. The full text of the studies selected was reviewed by all members of the expert panel, and each of them developed and shared the list of Delphi items via email and/or teleconference. The SR was divided into five sections: (1) Patient Clinical Data, (2) Clinical Evaluation, (3) Exam Technique, (4) Report and (5) Conclusion. A dedicated section for key images was added as part of the report.

1. The "Patient Clinical Data" section included patient clinical data and previous or family history of malignancies, including previous lung cancer, risk factors or predisposing pathologies. In this section, the item of "Allergies" to drugs and contrast medium was included.
2. The "Clinical Evaluation" section included previous examination results, a genetic panel and clinical symptoms.
3. The "Exam Technique" section included data regarding the CT equipment used (including the number of detector rows and whether single or dual energy scans were performed) and information concerning reconstruction algorithm(s) and slice thickness. Data on the contrast protocol were also collected (including information regarding post-contrast acquisitions), as well as data concerning the contrast medium (such as contrast active principle, commercial name, volume, flow rate, iodine concentration, and ongoing adverse events).
4.
The "Report" section included data regarding lung cancer location, morphology, margin sharpness, texture (e.g., solid, ground glass), contrast enhancement pattern, size, local invasion, tumor stage, node stage and metastatic stage, according to the Italian Association of Medical Oncology (AIOM) guidelines [22]. In this section, a dedicated subsection for other types of primary lung cancers was included.
5. The "Conclusion" section included diagnosis, TNM stage according to the 8th Edition of AJCC-UICC 2017 [23], annotations and comments.

Two Delphi rounds were carried out [24]. During the first round, each panelist independently contributed to refining the SR draft by means of online meetings or email exchanges. The level of panelist agreement for each SR model was tested in the second Delphi round, using a Google Form questionnaire shared by email. Each expert made individual comments for each specific template part (i.e., patient clinical data, clinical evaluation, exam technique, report and conclusion, images) using a five-point Likert scale (1 = strongly disagree, 2 = slightly disagree, 3 = slightly agree, 4 = modestly agree, 5 = strongly agree). After the second Delphi round, the final version of the SR was generated on the dedicated Radiological Society of North America (RSNA) website (radreport.org), using a T-Rex template format in line with the IHE (Integrating the Healthcare Enterprise) and MRRT (Management of Radiology Report Templates) profiles, accessible as open-source software, with the technical support of Exprivia™. These profiles determine both the format of the radiology report templates (using version 5 of the Hypertext Markup Language (HTML5)) and the transport mechanism used to request, retrieve and store these templates [25]. The radiology report was structured using a series of "codified queries" integrated into the T-Rex editor's preselected sections [25].
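The five-section structure and the item counts stated in the abstract can be sketched as a simple data model. This is an illustrative Python sketch only; it is not the actual T-Rex/MRRT HTML template, and the `render_skeleton` helper is invented for demonstration.

```python
# Section names and final item counts as reported for the final SR version.
SECTIONS = {
    "Patient Clinical Data": 16,
    "Clinical Evaluation": 4,
    "Exam Technique": 8,
    "Report": 22,
    "Conclusion": 5,
}

def render_skeleton(sections):
    """Render a plain-text skeleton listing each section, its item count,
    and the total number of items in the structured report."""
    lines = [f"{name} ({n} items)" for name, n in sections.items()]
    lines.append(f"Total items: {sum(sections.values())}")
    return "\n".join(lines)

print(render_skeleton(SECTIONS))
```

Summing the per-section counts reproduces the 55 items reported for the final SR version.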
Statistical Analysis

All ratings of the panelists for each section were analyzed using descriptive statistics measuring the mean score, the standard deviation value (STD) and the sum of scores. A mean score of 3 was considered good and a score ≥4 excellent. To measure the internal consistency of the panelist ratings for each section of the report, a quality analysis based on the average inter-item correlation was carried out using the Cronbach's alpha (Cα) correlation coefficient [26,27]. The Cα test provides a measure of the internal consistency of a test or scale; it is expressed as a number between 0 and 1. Internal consistency describes the extent to which all the items in a test measure the same concept. The Cα correlation coefficient was determined after each round. The closer the Cα coefficient is to 1.0, the greater the internal consistency of the items in the scale. An alpha coefficient (α) > 0.9 was considered excellent, α > 0.8 good, α > 0.7 acceptable, α > 0.6 questionable, α > 0.5 poor, and α < 0.5 unacceptable. However, in the iterations, an α of 0.8 was considered to be a reasonable goal for internal reliability. The data analysis was carried out using the Statistics Toolbox of MATLAB (The MathWorks, Inc., Natick, MA, USA).

Structured Report

The final SR (Appendix A) version was built by including 16 items in the "Patient Clinical Data" section, 4 items in the "Clinical Evaluation" section, 8 items in the "Exam Technique" section, 22 items in the "Report" section, and 5 items in the "Conclusion" section. Overall, 55 items were included in the final version of the SR. In Appendix B, the first draft of the SR is illustrated. The results obtained during the first Delphi round are reported in Table 1, and those obtained after the second Delphi round in Table 2. In the final version of the SR, the following parameters were included: 1.
In the "Exam technique" section, the equipment used, the number of detector rows and CT modality (i.e., single or dual energy), the reconstruction algorithm(s) used and contrast protocol; 2. In the "Report" section, the sites and the features of extrathoracic metastases were defined, identifying the target lesions in accordance with the Response Evaluation Criteria in Solid tumors (RECIST) 1.1 [28].

Consensus Agreement

Tables 1 and 2 show the single scores and the sums of scores of the panelists for staging with the SR in the first and second rounds, respectively. In both the first and the second rounds, as reported in Tables 1 and 2, all sections received more than a good rating. The overall mean score of the experts (13 experts) and the sum of scores for staging with the SR were 4.5 (range 1-5) and 631 (mean value 67.54, STD 7.53) (Table 1), respectively, in the first round. The items of the SR with higher accordance in the first round were primary lesion features, lymph nodes, metastases and conclusions (Table 1). The overall mean score of the experts (nine experts) and the sum of scores for staging with the SR were 4.7 (range 4-5) and 807 (mean value 70.11, STD 4.81) (Table 2), respectively, in the second round. The overall mean score of the experts in the second round was higher than the overall mean score of the experts in the first round, with a lower standard deviation value demonstrating the higher agreement reached among the experts in the SR in this round. The items of the SR in the second round that had higher "each reader" accordance were exam data and pulmonary involvement in multiple sites (Table 2). The Cronbach's alpha (Cα) correlation coefficient was 0.89 in the first round and 0.92 in the second round for staging with the SR.

Discussion

In the present study, the panel of experts demonstrated a high degree of agreement in defining the different items of the SR.
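The descriptive statistics and the Cronbach's alpha coefficient reported above can be reproduced from a raters × items matrix using the standard formula α = k/(k−1) · (1 − Σσ²ᵢ/σ²ₜ). The sketch below is illustrative only: the ratings matrix is invented, not taken from the panel's actual score tables.

```python
def descriptive_stats(scores):
    """Mean, population standard deviation and sum of a list of scores."""
    n = len(scores)
    mean = sum(scores) / n
    std = (sum((x - mean) ** 2 for x in scores) / n) ** 0.5
    return mean, std, sum(scores)

def cronbach_alpha(ratings):
    """Cronbach's alpha for a raters x items matrix of Likert scores:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(ratings[0])  # number of items

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([row[i] for row in ratings]) for i in range(k)]
    total_var = var([sum(row) for row in ratings])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Invented 4-rater x 3-item Likert matrix, for illustration only.
ratings = [
    [5, 4, 5],
    [4, 4, 4],
    [5, 5, 5],
    [3, 3, 4],
]
print(cronbach_alpha(ratings))
```

When every rater's scores move together across items (perfect inter-item correlation), the formula yields α = 1, matching the interpretation given in the Methods.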
After the second Delphi round, the panelists' mean score and the sum of scores related to the SR models were 4.7 (range 4-5) and 807 (mean value 70.11, STD 4.81), respectively. All sections received more than a good rating in the second Delphi round; however, the weakest sections were "Patient Clinical Data" and "Clinical Evaluation". Moreover, the Cα correlation coefficient reached 0.92 in the second round. The present SR is based on a multi-round consensus-building Delphi exercise performed to develop a comprehensive focus on the SR template for CT-based lung cancer staging, as a result of critical discussion between expert radiologists in thoracic and oncological imaging. This SR was based on a standardized terminology and structure, which are aspects required for adherence to diagnostic-therapeutic recommendations and for enrolment in clinical trials, thus reducing the ambiguity that may arise from non-conventional language, and enabling better communication between radiologists and clinicians [29][30][31][32][33]. Therefore, according to Weiss et al. [4], the present report is a third-level SR. Several sections are included in the present template: "Patient Clinical Data", "Clinical Evaluation", "Exam Technique", "Report" and "Conclusion". Some points should be evaluated for each of these sections. Regarding "Patient Clinical Data", this section included data regarding personal or family history of cancer, and exposure to different risk factors and any genetic mutations. Regarding predisposing diseases, the possibility of collecting data on Chronic Obstructive Pulmonary Disease (COPD) allows one to plan treatment tactics. COPD is generally defined as a chronic minimally reversible airflow obstruction based on spirometry (postbronchodilator forced expiratory volume in 1 second (FEV1)/forced vital capacity (FVC) less than 70%). 
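The fixed-ratio spirometric criterion just described (post-bronchodilator FEV1/FVC below 0.70) can be written as a small helper. The function name and signature are invented for illustration; the lower-limit-of-normal criteria mentioned below are not implemented.

```python
def has_airflow_obstruction(fev1_litres, fvc_litres, threshold=0.70):
    """Apply the fixed-ratio COPD criterion from the text:
    post-bronchodilator FEV1/FVC < 0.70 indicates airflow obstruction."""
    if fvc_litres <= 0:
        raise ValueError("FVC must be a positive volume")
    return fev1_litres / fvc_litres < threshold
```

For example, FEV1 = 1.8 L with FVC = 3.0 L gives a ratio of 0.60 and would be classified as obstructed, while FEV1 = 3.2 L with FVC = 4.0 L (ratio 0.80) would not.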
COPD and lung cancer share common features, including their high mortality and common risk factors (such as smoking), some genetic background, environmental exposures, and underlying common inflammatory processes. A ratio of FEV1 to FVC less than 0.7 is generally used to define airflow obstruction; however, other indices (such as FEV1/FVC under the lower limit of normal criteria, and a predicted reduction of FEV1%) have also been considered indicative of airway obstruction. In addition to these three main factors, the timing of COPD diagnosis, the degree of airflow obstruction, and the severity of emphysema have also been reported to exert a remarkable effect on the significance of the impact of COPD and/or emphysema on lung cancer risk. Although, at present, no solid evidence is available to clearly distinguish the roles of airflow obstruction and emphysema in lung cancer development, it is certain that the highest lung cancer risk occurs when airflow obstruction and emphysema coexist [12][13][14]. Such a painstaking process of data collection was subject to some disagreement among the panelists due to the opinion that this process could slow down the normal workflow and was not considered to be easy to use. However, it is necessary to point out that all SR sections are independent from each other, so that the Patient Clinical Data and Clinical Evaluation sections are optional and may be filled in or not upon user choice, although they were conceived with the aim of creating databases. In fact, the possibility of collecting all these data could allow the creation of a large database, not only for epidemiological studies, but also in the highest conception of radiology, to lay the foundations for radiomics studies [34][35][36][37]. Radiology reports should be rich in data that could potentially be pooled, analyzed and correlated with patient outcomes, thereby assisting future clinical and imaging guidelines. 
However, the use of non-standardized terminology limits the capacity for data collection across multiple institutions. In addition, the lack of consistent data extractable from SR could hinder the development of computerized applications to assist in reporting. Natural language processing applications can help extract the data from reports with variable terminology, allowing the compilation of standardized data, which could then be used to develop multi-institutional data registries, as well as in clinical and research analyses. Moreover, the possibility of combining genomic data and radiological features allows for the development of radiogenomics models, which today represent the highest level of advanced precision-medicine processes [38][39][40][41]. The fact that the present SR can be included in the picture archiving and communication system (PACS) is an added value; therefore, it is only necessary to enter these data once upon first entry into the radiology department. With regard to the "Exam Technique" section, sharing the examination technique not only within one's own department, but also with the radiology departments of other centers, fulfills a dual purpose. On the one hand, it enables the standardization of CT protocols; on the other hand, it allows carrying out diagnostic accuracy studies among different centers in order to optimize CT protocols. For example, during follow-up, differences in CT acquisition parameters and segmentation algorithms are important factors that can lead to variability in volumetric measurements. Therefore, slice thickness and other protocol-related factors (such as the reconstruction kernel and field of view) should be kept constant for reliable measurements to be carried out. Although some software packages allow the customization of options (which changes density thresholds for segmentation), standardized parameters should exist between practices in order to keep these parameters homogenous and comparable.
In the CT protocol optimization step, enhanced communication among different centers could theoretically lead to quality improvement by means of enhanced patient safety (e.g., by radiation dose reduction), contrast optimization, and image quality. With improved communication comes the sharing of knowledge and experience, along with the potential of reducing medical errors and improving clinical outcomes [42]. Some authors have reported that the use of a checklist could improve diagnostic accuracy [43][44][45]. In 2014, based on the results of several screening trials, the American College of Radiology (ACR) released version 1.0 of the Lung CT Screening Reporting and Data System (Lung-RADS) [46]. This is a standardized method of reporting with recommendations for the management of pulmonary nodules detected on CT for lung cancer screening. When utilized, it can reduce the false positive rate in lung cancer screening, without increasing the rates of false negatives [47,48]. Lung-RADS is now deeply embedded as a quality metric on which regulation and reimbursement are determined by the Centers for Medicare and Medicaid Services [49,50]. During the first 5 years of nationwide lung cancer screening, there was a significant accumulation of data and experience, with many opportunities for continued learning [49,50]. The present "Report" section was designed to report all the structural characteristics of the lesions, such as margins and density, as well as relationships with locoregional structures (e.g., the pleura), which allow correct staging, but could also impact the choice of a more suitable therapeutic treatment based on the individual patient. The advantages of SR over FTR include its standardized terminology and structure, aspects required for adherence to diagnostic-therapeutic recommendations and for enrolment in clinical trials. SR reduces the ambiguity that may arise from non-conventional language.
However, it should be noted that SR templates usually include a free text box for reporting any additional data that cannot be embedded in the default template fields. The wide implementation of SR is critical for providing referring physicians and patients with the best quality of service, and for providing researchers with the best quality of data in the context of the big data exploitation of available clinical information [51][52][53][54]. Implementation is complex, requiring mature technology to successfully address pending user-friendliness, organizational and interoperability challenges (with particular regard to the adequate storage of data, and easy and adequate connections with PACS and post-processing software). Consequently, the introduction of SR should be seen as a comprehensive effort, affecting all domains of radiology [55][56][57][58]. Despite the promising results obtained, this study has some limitations. First, the panelists were all radiologists; therefore, a multidisciplinary approach is lacking. A multidisciplinary validation of SR would have been more appropriate. Second, the panelists were of the same nationality; contributions from experts from multiple countries would allow for broader sharing, and would increase the consistency of the SR. Finally, this study was not aimed at assessing the impact of SR on the management of patients with lung cancer. This issue will be discussed in forthcoming studies.

Conclusions

The wide implementation of SR is a critical point for providing referring physicians and patients with the best quality of service, and for providing researchers with the best quality of data in the context of the big data exploitation of the available clinical information.
Implementation is complex, requiring mature technology to successfully address pending user-friendliness, organizational and interoperability challenges (specifically, the adequate storage of data, and the easy and adequate connection with PACS and post-processing software). Consequently, the introduction of SR should be seen as a comprehensive effort, affecting all domains of radiology. The authors have no conflict of interest to disclose. The authors confirm that the article is not under consideration for publication elsewhere. Each author participated sufficiently to take public responsibility for the content of the manuscript.
v3-fos-license
2017-04-03T05:12:22.850Z
2014-10-01T00:00:00.000
15305151
{ "extfieldsofstudy": [ "Materials Science", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1420-3049/19/10/16024/pdf", "pdf_hash": "a02d39ac94efd00daf27226ee7839ada062e5bd0", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1378", "s2fieldsofstudy": [ "Materials Science" ], "sha1": "a02d39ac94efd00daf27226ee7839ada062e5bd0", "year": 2014 }
pes2o/s2orc
Epoxidized Vegetable Oils Plasticized Poly(lactic acid) Biocomposites: Mechanical, Thermal and Morphology Properties

Plasticized poly(lactic acid) (PLA) with epoxidized vegetable oils (EVO) was prepared using a melt blending method to improve the ductility of PLA. The plasticization of the PLA with EVO lowers the Tg as well as the cold-crystallization temperature. The tensile properties demonstrated that the addition of EVO to PLA led to an increase of elongation at break, but a decrease of tensile modulus. Plasticized PLA showed improvement in the elongation at break by 2058% and 4060% with the addition of 5 wt % epoxidized palm oil (EPO) and a mixture of epoxidized palm oil and soybean oil (EPSO), respectively. An increase in the tensile strength was also observed in the plasticized PLA with 1 wt % EPO and EPSO. The use of EVO increases the mobility of the polymeric chains, thereby improving the flexibility and plastic deformation of PLA. The SEM micrographs of the plasticized PLA showed compatible morphologies without voids, resulting from good interfacial adhesion between PLA and EVO. Based on the results of this study, EVO may be used as an environmentally friendly plasticizer that can improve the overall properties of PLA.

Introduction

Over the past few decades, most polymers used are petrochemically derived products that are non-biodegradable. The increase in global environmental problems such as greenhouse gas emissions and diminishing fossil resources has focused more attention on the development of green polymer composites. One such type is bio-based polymer composites, which are environmentally friendly, compostable, biodegradable, and are acquired from renewable and sustainable resources [1]. They reduce our dependency on depleting fossil fuels and the generation of hazardous substances. One of the most favorable materials for the production of high-performance, environmentally friendly biodegradable polymers is aliphatic polyester.
Poly(lactic acid) (PLA), a linear aliphatic polyester, is regarded as the most promising substitute for petroleum-based polymers due to its mechanical characteristics, such as tensile strength and Young's modulus, which are similar to those of polyethylene terephthalate (PET) or nylon [2]. Moreover, PLA has good potential due to its excellent properties, such as high mechanical strength, transparency, compostability, moderate barrier capability and safety. Despite these desirable features, the high brittleness of PLA limits its application [1]. Therefore, considerable efforts have been made to enhance the characteristics of the polymer by employing plasticizers. Plasticizer is typically present in a range between about 1% to 10% by weight of polymeric material. Below 1%, the plasticizer may not effectively plasticize the polymeric material, and above 10%, it tends to leach out of the polymeric material [3]. Petroleum-based plasticizers are standard compounding ingredients; however, epoxidized vegetable oil-based plasticizers are employed as a feasible alternative [4]. Vegetable oils are derived from plants and are chemically composed of different triacylglycerols, i.e., esters of glycerol and fatty acids [5]. Vegetable oils are attractive raw materials for many industrial applications as they are derived from renewable resources, biodegradable, environmentally friendly, easily available and produced in large quantities at a competitive cost [6]. Palm oil is a favorable vegetable oil because it is cheap, low in toxicity, and easily available as a sustainable agricultural resource. It comes from the palm tree, one of the most economical perennial oil crops in Malaysia, which belongs to the species Elaeis guineensis under the family Palmacea and originated in the tropical forests of West Africa. Epoxidized vegetable oils (EVO) are used extensively as plasticizers, stabilizers, and additives for many polymers [7]. Al-Mulla et al.
studied the effects of epoxidized palm oil as a plasticizer on the PLA/PCL blend prepared via the solution casting process [8]. The addition of epoxidized palm oil reduced the tensile strength and modulus but increased the elongation at break of the PLA/PCL blend. The highest elongation at break was observed for the blend with 10 wt % epoxidized palm oil content. EVO are also a major raw material in the production of high-functionality vegetable oil-based materials such as lubricants [9][10][11][12], alkyl nitrate triglycerides [13] and polyols [14,15]. In this study, two types of epoxidized vegetable oils (EVO), epoxidized palm oil (EPO) and a mixture of epoxidized palm oil and soybean oil (EPSO), are used as plasticizers for PLA via a melt blending technique. The aim of this study was to investigate the effects of plasticizer loadings on the mechanical and thermal properties of PLA, as well as to investigate the interaction between PLA and the plasticizers. This material has great potential as an alternative to conventionally used polymers such as polypropylene, as a biodegradable or green biocomposite in the packaging industry.

Fourier Transform Infrared (FTIR) Spectroscopy

FTIR spectroscopy is used to monitor the absorption peak shift in specific regions to determine the known functional group interactions of the PLA with EVO. The FTIR spectra of PLA/EPO and PLA/EPSO are depicted in Figure 1a,b, respectively. The spectra show four main regions: -CH stretching at 3000-2850 cm−1, C=O stretching at 1750-1745 cm−1, C-H bending at 1500-1400 cm−1 and -C-O stretching at 1100-1000 cm−1. The FTIR spectra of EPO and EPSO exhibited the unique characteristic peaks that corresponded to the C-O-C stretching from oxirane vibrations at 950-850 cm−1 and around 1250 cm−1. The signal at 1250 cm−1 usually overlays with others, mainly -C-O, which is present in oils.
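The band assignments listed above can be collected into a simple lookup table. The wavenumber ranges are those quoted in the text; the helper function itself is an invented illustration.

```python
# Wavenumber ranges (cm^-1) and assignments as quoted in the text.
FTIR_BANDS = [
    ((2850, 3000), "C-H stretching"),
    ((1745, 1750), "C=O stretching"),
    ((1400, 1500), "C-H bending"),
    ((1000, 1100), "C-O stretching"),
    ((850, 950), "C-O-C stretching (oxirane vibration)"),
]

def assign_band(wavenumber):
    """Return the assignment(s) whose range covers the given wavenumber,
    or 'unassigned' if it falls outside all listed regions."""
    hits = [name for (lo, hi), name in FTIR_BANDS if lo <= wavenumber <= hi]
    return hits if hits else ["unassigned"]

print(assign_band(1747))
```

For instance, the carbonyl peak near 1747 cm−1 falls in the C=O stretching region, while a peak at 900 cm−1 falls in the oxirane region characteristic of the epoxidized oils.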
For the plasticized PLAs, the peaks at about 3500 cm−1 indicate the presence of the free O-H stretching vibration from the production of EPO via acid catalysis with hydrogen peroxide (H2O2) [16]. During the epoxidation with peracids, the acid produced simultaneously proceeds along with the reversible reaction with hydrogen peroxide to generate peracid again and a free water group. A small amount of hydroxyl group (O-H) in the biocomposite could be attributed to the possible terminal hydroxyl groups in the PLA main chain which were released during the interaction between PLA and EPO [17]. A small shift of the C-O stretching peak from 1080 cm−1 (neat PLA) to 1085 cm−1 in both PLA/EPO and PLA/EPSO was observed. This shift in the absorption peak indicates the miscibility and interaction of PLA and EVO. The upward shift may possibly be due to an interaction between the hydroxyl group of PLA and the epoxy group of EVO through hydrogen bonding. A proposed possible interaction between PLA and EVO is shown in Figure 2. The hydrogen bonding exists due to the polymer-plasticizer (PLA-EVO) interaction and is influenced by the epoxy content, also known as the oxirane oxygen content (OOC), of the epoxidized oils. The OOC value shows the epoxy groups which exist in the plasticizer. It is essential for a good plasticizer to contain two types of structural components: polar and non-polar components. The OOC represents the polar component other than the carbonyl group of the carboxylic ester functionality. A higher OOC value of EPSO (3.58%) compared to EPO (3.23%) reflects a stronger interaction between PLA and EPSO through hydrogen bonding compared to PLA and EPO. According to George Wypych, polar groups in a plasticizer improve mechanical properties and are essential for good compatibility [18].
Eventually, if the plasticizer used is very non-polar (low OOC value but high iodine value), it results in poor interaction and eventually less compatibility between polymer and plasticizer, as found with PLA/EPO, which leads to lower mechanical properties compared to PLA/EPSO.

Mechanical Properties

Tensile properties are the most frequently used indicator of change caused by plasticization. The addition of 1 wt % EVO plasticizers into the PLA matrix significantly improves the tensile strength of PLA. PLA/1 wt % EPO and PLA/1 wt % EPSO show increments of approximately 5% and 11%, respectively, as shown in Figure 3a, compared to neat PLA. The tensile strength of PLA decreases with the addition of plasticizers above 1 wt %. The drop in tensile strength may be caused by the formation of plasticizer-plasticizer interactions, which dominate at higher EPO or EPSO contents, resulting in a phase-separated structure. In addition, at higher plasticizer loading, only a part of the plasticizer was located in the interfacial area, while the remainder was spread in the matrix, influencing the homogeneity and causing the drop in the tensile strength of the plasticized PLA [19]. Addition of plasticizer beyond the optimal loading causes a decrease of the elongation at break, making the biocomposite more brittle. With 5 wt % plasticizer loadings, PLA/EPO and PLA/EPSO displayed elongation at break of 114.4% and 220.5%, respectively. In general, plasticizer is introduced to a polymer matrix to overcome the brittleness caused by extensive intermolecular interactions. Thus, the presence of the plasticizers EPO and EPSO decreases these intermolecular forces and enhances the mobility of PLA polymer chains, causing an increase in flexibility and extensibility of the PLA. Several theories have been proposed to explain the mechanism and action of plasticizers on polymers. Among those theories, lubricity theory and gel theory have been widely accepted to describe the effect of plasticizers on polymeric networks.
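The percentage improvements quoted in the abstract can be related to the elongation values above with a simple helper. The neat-PLA elongation is not stated explicitly in this excerpt; back-calculating from the quoted improvements of 2058% and 4060% suggests a baseline of roughly 5.3%, which is an inferred value used here only for illustration.

```python
def percent_improvement(new_value, baseline):
    """Percent improvement of a property relative to a baseline value."""
    return (new_value - baseline) / baseline * 100.0

# Reported elongations at break at 5 wt % plasticizer; the 5.3% baseline
# for neat PLA is inferred from the quoted improvements, not stated in the text.
neat_pla_elongation = 5.3
print(round(percent_improvement(114.4, neat_pla_elongation)))  # PLA/EPO
print(round(percent_improvement(220.5, neat_pla_elongation)))  # PLA/EPSO
```

With the inferred 5.3% baseline, the helper reproduces improvements of about 2058% (EPO) and 4060% (EPSO), matching the abstract.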
Lubricity theory states that the plasticizer acts as a lubricant, reducing friction and facilitating the movement of polymer chains past one another, thereby lowering the resistance to deformation. Gel theory extends lubricity theory and suggests that a plasticizer disrupts and replaces the polymer-polymer interactions (hydrogen bonds, van der Waals or ionic forces, etc.) that hold polymer chains together, reducing the polymer gel structure and increasing flexibility. Since the plasticizer plasticizes the polymer, the tensile modulus of the plasticized material is expected to decrease with increasing amounts of plasticizer, as shown in Figure 3c. Neat PLA exhibited a tensile modulus of 1209 MPa. The addition of 10 wt % EPO and 10 wt % EPSO reduces the stiffness of PLA to 757 MPa and 841 MPa, respectively. This is attributed to the toughening and elastomeric effect of EPO and EPSO, which contain epoxy groups that can form favorable interactions with PLA, presumably via hydrogen bonding as proposed in Figure 2. It should be noted that the tensile strength and elongation at break of PLA/EPSO are slightly higher than those of PLA/EPO. This is due to the polymer-plasticizer interaction, which can be explained by the epoxy content, also known as the oxirane oxygen content (OOC), of the epoxidized oil. The OOC value indicates the epoxy groups present in the plasticizer. The higher OOC value of EPSO (3.58%) compared to EPO (3.23%) indicates a stronger interaction (hydrogen bonding) between PLA and EPSO, which gives better tensile properties. Based on the elongation at break results, the PLAs plasticized with 5 wt % EPO and 5 wt % EPSO showed the best performance and were selected for further study in comparison with pristine PLA. Dynamic Mechanical Analysis (DMA) Dynamic mechanical analysis (DMA) is a method in which the elastic and viscous responses of a sample under oscillating load are monitored as a function of temperature.
DMA results are expressed by three parameters: the storage modulus (E'), the loss modulus (E''), and tan δ (the E''/E' ratio). During heating at constant frequency, the storage modulus (E') typically decreases strongly as the temperature crosses the dynamic glass transition, while the loss modulus (E'') and the loss factor (tan δ) exhibit peaks. PLA is a semicrystalline material, and its storage modulus begins to decrease rapidly at 50 °C as the material enters its glass transition. Because of its crystallinity, it displays a region of relative stability before its modulus drops rapidly as its structure approaches the melting point. DMA of the plasticized PLAs was carried out to assess the effect of the plasticizers on the thermomechanical properties. Figure 4a-c illustrates the dynamic storage modulus, loss modulus and tan δ of PLA and the plasticized PLAs, respectively, as a function of temperature. The decrease in storage modulus of the plasticized PLAs is consistent with the trend of the tensile modulus values from the tensile test. In addition, the plasticizer provides a moderate toughening and elastomeric effect, which brings about a decrease in the modulus values. The reduction in storage modulus with temperature is related to softening of the matrix at higher temperature: as the temperature exceeds the softening point, the mobility of the matrix chains increases, leading to the sharp decrease of the modulus between 50-60 °C [20]. Figure 4b shows the loss modulus of PLA and the plasticized PLAs. The peak intensity of the loss modulus curve signifies the melt viscosity of a polymer. The loss modulus enhancement is higher in the plasticized PLAs than in neat PLA, indicating that the incorporated plasticizer increases the melt viscosity of PLA by acting as a good solvent or plasticizer [21]. This is attributable to the added EVO, which increases flowability by prompting the PLA polymer chains to align in the direction of flow, owing to a less rigid polymeric material [22].
PLA/EPSO shows a higher melt viscosity than PLA/EPO, due to improved interphase interaction between PLA and EPSO through hydrogen bonding, and has a higher ability to dissipate mechanical energy through molecular motion. The tan δ peak is used to investigate the glass transition of semicrystalline polymers or polymeric networks. The temperature dependence of tan δ of PLA and the plasticized PLAs is presented in Figure 4c. The tan δ curves show two dynamic relaxation peaks, at 80-90 °C and 50-60 °C, referred to as the α- and β-relaxation peaks, respectively. The β-relaxation peak is assumed to be linked to the breakage of hydrogen bonding between polymer chains, inducing long-range segmental chain movement in the PLA matrix. Therefore, the β-relaxation peak at 50-60 °C was assigned to the glass transition temperature, Tg. The Tg of PLA is about 55 °C, and the plasticized PLAs show slight shifts to lower Tg, as seen in Figure 4c. The glass transition temperatures decreased slightly as a result of the plasticization effect. Figure 4c also shows a slight increase in the intensity of the β-relaxation peak. Since the glass transition process is linked to molecular motion, Tg is influenced by molecular packing, chain rigidity and linearity as well. Because the intensity of the β-relaxation peak is associated with molecular mobility, the incorporation of plasticizers into the PLA matrix increases molecular mobility and in turn increases the intensity of the relaxation peak. Thermogravimetry Analysis (TGA) The thermal degradation behavior of the plasticized PLAs was studied by TGA. The TGA thermograms are shown in Figure 5.
Thermal stability factors, including the initial decomposition temperature (Tonset), the temperature of maximum rate of degradation (Tmax) and the decomposition temperature at 50% weight loss (T50) of the plasticized PLAs, can be determined from the TGA thermograms. As observed in Figure 5, the decomposition behavior of the plasticized PLAs is largely similar to that of neat PLA and takes place in a single weight-loss step. The Tonset, Tmax and T50 of the plasticized PLAs are tabulated in Table 1. The decomposition of the plasticized PLAs commences near 300 °C and proceeds rapidly until 430 °C. The degradation onset temperatures of PLA/EPO and PLA/EPSO are higher than that of neat PLA. Neat PLA has an onset temperature of 274.26 °C, which increases to 313.54 °C and 330.40 °C when 5 wt % of EPO and EPSO, respectively, is incorporated into the PLA. These results confirm that the plasticized PLAs containing 5 wt % plasticizer show excellent thermal stability, which can be attributed to the well-developed network structure in the biocomposites. For example, neat PLA has a Tmax of 345.12 °C, which increases to 379.79 °C and 396.34 °C when EPO and EPSO, respectively, are incorporated into the PLA. Differential Scanning Calorimetry (DSC) Differential scanning calorimetry (DSC) measures the amount of heat energy absorbed or released when a material is heated or cooled. For polymeric materials, which undergo important property changes near thermal transitions, DSC is a very useful technique to study the glass transition temperature, crystallization temperature and melting behavior. Pristine PLA showed an endothermic melting peak at Tm = 149.79 °C. A minor decrease in the melting temperature of 3 to 4 °C was observed, indicating that the melting temperature of PLA was not greatly affected by the addition of EVO plasticizer.
The pristine PLA showed a sharp Tg, and its value decreased gradually with the addition of EVO, as shown in Figure 6. Tg decreased from 62.85 °C for pristine PLA to 60.12 °C and 60.79 °C when 5 wt % of EPO and EPSO, respectively, was added, owing to the enhanced segmental mobility of PLA chains caused by the EVO plasticizers. No trace of separate melting or crystallization of EVO was found, confirming that phase separation of EVO did not occur. Cold-crystallization was chosen as the crystallization method because it leads to more intense spherulite nucleation, resulting in shorter crystallization times and smaller spherulite sizes [23]. Pristine PLA showed a cold-crystallization temperature of about 124.11 °C. The cold-crystallization temperature of PLA decreased with EVO addition, in parallel with the shift in Tg shown in Figure 6. The cold-crystallization temperature decreased to 114.16 °C and 108.97 °C for the biocomposites containing 5 wt % EPO and EPSO, respectively. The depression of Tcc and the decrease in Tg indicate that the EVO was compatible with PLA [1]. Enhanced chain mobility increased the rate of crystallization, which allowed PLA to crystallize at lower temperature. Furthermore, the crystallization peak of the biocomposites narrowed due to the increased ability of PLA to crystallize [24]. Scanning Electron Microscopy (SEM) Scanning electron microscopy (SEM) was employed to discern the surface morphology of the fractured tensile specimens and to qualitatively illustrate the state of dispersion of the EVO in the polymer matrix. The fracture surface of neat PLA is shown in Figure 7a, exhibiting a flat surface corresponding to brittle crack-growth behavior [19]. The addition of EVO as a plasticizer to the PLA matrix produced a marked change in morphology, with improved interfacial adhesion and dispersion. SEM micrographs of PLA/5 wt % EPSO (Figure 7c) show very compatible morphologies without edges, cavities or holes compared to PLA/5 wt % EPO (Figure 7b).
This implies better load transfer under stress, consistent with the enhanced tensile properties of the biocomposites, and indicates good adhesion between the components with a diffuse polymer-plasticizer interface attributed to chemical interactions between PLA and EVO. Thus, the EVO was well dispersed to form a homogeneous matrix with evident signs of plasticization in the PLA matrix, without separation at the interface, producing a single-phase morphology [25]. Materials Poly(lactic acid) resin, commercial grade 4042D, Mw ~ 390,000 Da, was supplied by NatureWorks® LCC, Minnetonka, MN, USA. Epoxidized palm oil (EPO) and a mixture of epoxidized palm oil and soybean oil (EPSO) were supplied by the Advanced Oleochemical Technology Division (AOTD), Malaysian Palm Oil Board (MPOB, Kajang, Malaysia). The characteristics of the EPO obtained are listed in Table 2. Preparation of PLA/EVO Biocomposites The PLA/EVO biocomposites were prepared by a melt-blending technique using a Brabender internal mixer at 170 °C with a rotor speed of 50 rpm. The plasticizer was added after 2 min of blending the PLA, and blending continued for another 8 min. The EVO content was varied from 0 to 10 wt %. The biocomposites obtained were then molded into sheets 1 mm in thickness by hot pressing at 165 °C for 10 min at a pressure of 110 kg/cm2, followed by cooling to room temperature. The sheets were used for further characterization. Fourier Transform Infrared (FTIR) Spectra The FTIR spectra of the biocomposites and raw materials were recorded using a Fourier transform infrared spectrometer (Perkin-Elmer, Model 1000 series) equipped with a universal attenuated total reflectance (UATR) accessory. The spectra were recorded over the 4000 cm−1 to 280 cm−1 frequency range. The data were analyzed using FTIR Spectrum software (Perkin Elmer).
Tensile Properties Measurement Tensile properties were tested using an Instron 4302 series IX (Buckinghamshire, UK). The samples were cut into dumbbell shapes following the ASTM D638 (type V) standard. A load of 1.0 kN was applied at a constant crosshead speed of 10 mm/min at room temperature. Tensile strength, tensile modulus and elongation at break were evaluated from the stress-strain data. Seven replicates of each sample were tested to obtain a reliable mean and standard deviation. Dynamic Mechanical Analysis Dynamic mechanical analysis (DMA) was performed according to ASTM D5023 on a dynamic mechanical analyzer (Perkin-Elmer PYRIS Diamond DMA) in bending mode. The temperature was scanned from ambient (25 °C) to 150 °C at a constant heating rate of 2 °C/min and a dynamic-force frequency of 1 Hz under a nitrogen atmosphere. The storage modulus (E'), loss modulus (E''), loss factor (tan δ) and glass transition temperature of each specimen were obtained as a function of temperature. Thermal Properties Differential scanning calorimetry (DSC) analysis was performed with a Perkin Elmer JADE DSC to study the nonisothermal crystallization kinetics. The DSC procedure consisted of three steps. First, the films were heated from 30 to 180 °C at a heating rate of 10 °C/min. They were then held at this temperature for 5 min to eliminate the thermal history, cooled to 30 °C at a cooling rate of 10 °C/min and held at 30 °C for 5 min. In the last step, they were reheated to 180 °C at a heating rate of 10 °C/min. Thermogravimetric analysis (TGA) was carried out using a Perkin Elmer Pyris 7 TGA analyzer with a scan range from 35 °C to 800 °C at a constant heating rate of 10 °C/min under continuous nitrogen flow. The thermal degradation temperatures considered were the onset temperature (Tonset), the temperature of maximum weight loss (Tmax) and the temperature at 50% weight loss (T50).
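The three-step DSC program described above can be written out as a simple schedule to check the segment durations; this is only an illustrative sketch of the protocol timing, not instrument code:

```python
# DSC thermal program: heating/cooling at 10 C/min between 30 C and 180 C,
# with 5-min isothermal holds after the first heating and the cooling step.
RATE = 10.0  # C/min

segments = [
    ("heat 30 -> 180 C",   abs(180 - 30) / RATE),  # 15 min
    ("hold at 180 C",      5.0),                    # erase thermal history
    ("cool 180 -> 30 C",   abs(30 - 180) / RATE),  # 15 min
    ("hold at 30 C",       5.0),
    ("reheat 30 -> 180 C", abs(180 - 30) / RATE),  # 15 min
]

total = sum(duration for _, duration in segments)
for name, duration in segments:
    print(f"{name}: {duration:.0f} min")
print(f"total program time: {total:.0f} min")  # 55 min
```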
Morphology The fracture surfaces of the tensile-fractured samples were studied under a JEOL JSM-6400 scanning electron microscope (SEM; Tokyo, Japan) at an accelerating voltage of 30 kV. The fractured surfaces were coated with a thin layer of gold prior to observation. Conclusions PLA plasticized with different epoxidized vegetable oils (EVO) was successfully prepared. Tensile tests showed that the optimum improvement in mechanical properties was achieved when 5 wt % of EVO was introduced into the PLA matrix. Furthermore, SEM analysis revealed better miscibility and interfacial adhesion between PLA and 5 wt % EVO. Consequently, the main goal of improving the flexibility of PLA was achieved. These findings support the use of EVO as an excellent plasticizer that increases the interaction at the phase boundaries and the overall properties. Furthermore, PLA/EVO biocomposites can be used as biodegradable or green composite alternatives to conventionally used polymers, such as polypropylene.
Prognostic value of hyperlactatemia in infected patients admitted to intensive care units: a multicenter study Objective: To evaluate the influence of patient characteristics on hyperlactatemia in an infected population admitted to intensive care units and the influence of hyperlactatemia severity on hospital mortality. Methods: A post hoc analysis of hyperlactatemia in the INFAUCI study, a national prospective, observational, multicenter study, was conducted in 14 Portuguese intensive care units. Infected patients admitted to intensive care units with a lactate measurement in the first 12 hours of admission were selected. Sepsis was identified according to the Sepsis-2 definition accepted at the time of data collection. The severity of hyperlactatemia was classified as mild (2 - 3.9mmol/L), moderate (4.0 - 9.9mmol/L) or severe (> 10mmol/L). Results: In a total of 1,640 patients infected on admission, hyperlactatemia occurred in 934 patients (57%), classified as mild, moderate and severe in 57.0%, 34.4% and 8.7% of patients, respectively. The presence of hyperlactatemia and a higher degree of hyperlactatemia were both associated with a higher Simplified Acute Physiology Score II, a higher Charlson Comorbidity Index and the presence of septic shock. The lactate Receiver Operating Characteristic curve for hospital mortality had an area under the curve of 0.64 (95%CI 0.61 - 0.72), which increased to 0.71 (95%CI 0.68 - 0.74) when combined with Sequential Organ Failure Assessment score. In-hospital mortality with other covariates adjusted by Simplified Acute Physiology Score II was associated with moderate and severe hyperlactatemia, with odds ratio of 1.95 (95%CI 1.4 - 2.7; p < 0.001) and 4.54 (95%CI 2.4 - 8.5; p < 0.001), respectively. Conclusion: Blood lactate levels correlate independently with in-hospital mortality for moderate and severe degrees of hyperlactatemia. 
Conflicts of interest: None. Submitted on November 8, 2020. Accepted on October 30, 2021. Previous studies investigated the impact of hyperlactatemia on admission mortality in patients admitted to the intensive care unit (ICU) and reported cumulative prevalence rates varying from 10 to 91%. (5) On ICU admission, a higher lactate concentration within the normal reference range (relative hyperlactatemia) has been shown to be an independent predictor of hospital mortality in critically ill patients. (6,7) Lactate levels between 2.0 - 3.9mmol/L in patients with suspected infection were associated with significant mortality, even in the absence of hypotension. (8) Haas et al. (9) found that the degree of hyperlactatemia was directly related to the severity of shock and to the mortality rate, which reached 80% in patients with lactate levels > 10mmol/L. Although hyperlactatemia on ICU admission has been shown to be a good prognostic marker, dynamic changes in lactate concentration have also proven to have independent predictive value.
(10) In fact, lactate level and lactate clearance were both useful targets in patients with suspected infection (11) or septic shock defined by Sepsis-3 (12) in the emergency department, and in a recent study, 6-hour lactate level was more accurate than 6-hour lactate clearance in predicting 30-day mortality. (13) Many factors confound the clinical use of lactate level. The most common in clinical practice are the use of catecholamines in septic shock patients, alkalosisinduced increases in glucose metabolism, lactate-buffered continuous hemofiltration, liver dysfunction, and lung lactate production. (14) Medical literature is limited regarding how patient and disease characteristics may influence blood lactate values and whether those factors are determinants of outcomes. The aims of this study were to evaluate the influence of patient characteristics on the presence of hyperlactatemia in an infected population on ICU admission and determine differences in outcomes and to investigate the association between the severity of hyperlactatemia and mortality. Study protocol We carried out a post hoc analysis of the Infection on Admission to the ICU (INFAUCI) study, which was an observational, multicenter, prospective cohort study conducted in 14 Portuguese ICUs with data collected between 1st May 2009 and 31 December 2010. (15) The study protocol was described elsewhere. (15) Briefly, all adult patients (age ≥ 18 years) consecutively admitted during one year to one of the participating units were enrolled and followed until death or 6 months after ICU admission. The Hospital Research and Ethics Committee of Centro Hospitalar São João approved the study design. Informed consent was waived due to the observational nature of the study. For the purpose of this study, we analyzed arterial blood lactate levels on ICU admission in infected patients. The highest value of lactate within the first 12 hours of admission was recorded. 
Infection and sepsis criteria were identified at the time of admission to the ICU according to the commonly used definitions accepted at the time of data collection, i.e., the Sepsis-2 definition. (16) According to this consensus, septic shock is defined as a state of acute circulatory failure characterized by persistent arterial hypotension unexplained by other causes. Data were collected on patient demographic and clinical characteristics, such as sex, age, Simplified Acute Physiology Score II (SAPS II), Sequential Organ Failure Assessment (SOFA) score, Charlson Comorbidity Index score, (17) comorbidities, functional status, origin and diagnosis on admission, ICU length of stay (LOS) and hospital LOS. The primary outcome was in-hospital mortality. Arterial blood lactate was measured using a blood gas analyzer. Statistical analysis The continuous variables SAPS II, SOFA score, Charlson Comorbidity Index score, ICU LOS and hospital LOS were dichotomized around the mean/median values found for the whole population. (15) Age was categorized into two groups: less than 65 years, and 65 years and older. A cutoff value of 2mmol/L was used to define hyperlactatemia. Hyperlactatemia was categorized into three groups according to severity: mild (2.0 - 3.9mmol/L), moderate (4.0 - 9.9mmol/L) and severe (> 10.0mmol/L). Categorical variables were described as absolute and relative frequencies, and continuous variables were expressed as median (P25 - P75) or mean ± standard deviation, according to the data distribution. Comparisons between groups were performed with t-tests for independent samples, Mann-Whitney U tests or Kruskal-Wallis tests for continuous variables and Chi-squared tests for categorical variables, as appropriate.
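The hyperlactatemia bands defined above map directly onto a small classification rule; the sketch below simply restates those cutoffs and is illustrative, not the study's analysis code:

```python
# Severity bands for admission lactate (mmol/L) used in this analysis:
# < 2.0 normal; 2.0-3.9 mild; 4.0-9.9 moderate; >= 10.0 severe.
def lactate_severity(lactate_mmol_l: float) -> str:
    if lactate_mmol_l < 2.0:
        return "normal"
    if lactate_mmol_l < 4.0:
        return "mild"
    if lactate_mmol_l < 10.0:
        return "moderate"
    return "severe"

print(lactate_severity(1.5))   # normal
print(lactate_severity(2.15))  # mild (the cohort's median lactate)
print(lactate_severity(5.0))   # moderate
print(lactate_severity(12.0))  # severe
```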
Logistic regression was applied, and patient demographic and clinical characteristics were included in the univariate analysis considering the categories previously established: age (< 65, ≥ 65); SAPS II score (< 45, ≥ 45); SOFA score on admission (< 7, ≥ 7); Charlson Comorbidity Index score (< 4, ≥ 4); comorbidities (no/yes); infection source (pneumonia, tracheobronchitis, endovascular, intra-abdominal, skin and soft tissue, urological, neurological, other); diagnosis on admission (medical, elective surgery, emergency surgery, trauma); septic shock (no/yes); bacteremia (no, primary, secondary) and hyperlactatemia (< 2; 2 - 3.9; 4 - 9.9; ≥ 10). All variables with p < 0.05 in the univariate analysis were included in the final multivariate regression model (enter method). The associations between patient characteristics and the primary outcome were assessed by the odds ratio (OR) with a 95% confidence interval (95%CI) estimated by the multivariate models, and goodness-of-fit was assessed by the Hosmer-Lemeshow statistic and test. Two models were fitted, adjusting for other covariates by either the SAPS II or the SOFA score. Age, chronic liver disease, chronic respiratory disease and cancer were not included in the multivariate analysis owing to collinearity with the Charlson Comorbidity Index. The Receiver Operating Characteristic (ROC) area under the curve (AUC) was used to evaluate the ability of blood lactate to predict in-hospital mortality, and ROC curves were compared using 95%CIs for the AUC. All reported p-values were two-sided, and the significance level was set at 5%. Data were statistically analyzed using IBM Statistical Package for the Social Sciences (SPSS)® version 24.0 software (IBM Corp., Armonk, NY, USA). Population characteristics A total of 3,766 consecutively admitted patients were included in the INFAUCI study.
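As a side note on the ROC methodology above, the AUC of a single continuous predictor such as lactate equals the probability that a randomly chosen nonsurvivor has a higher value than a randomly chosen survivor (the Mann-Whitney interpretation). A minimal rank-based sketch on illustrative values, not study data:

```python
# Rank-based AUC: the probability that a random positive case (nonsurvivor)
# outranks a random negative case (survivor), counting ties as 0.5.
def auc(positives, negatives):
    wins = 0.0
    for p in positives:
        for n in negatives:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(positives) * len(negatives))

# Hypothetical lactate values (mmol/L), chosen only for illustration:
nonsurvivors = [4.8, 9.1, 2.6, 11.0, 3.2]
survivors = [1.2, 2.1, 1.8, 3.0, 2.4]
print(f"AUC = {auc(nonsurvivors, survivors):.2f}")
```

An AUC of 0.5 would mean lactate carries no discriminative information, while 1.0 would mean perfect separation of survivors from nonsurvivors.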
Blood lactate was measured on admission in 3,259 patients, with 1,640 patients being included in the infected group on ICU admission ( Figure 1). The median blood lactate level was 2.15mmol/L and significantly increased with age, SAPS II score, SOFA score on admission and Charlson Comorbidity Index score (Table 1S -Supplementary material). In regard to patients' comorbidities, significantly higher blood lactate levels were observed in patients with chronic liver disease, immunosuppression and cancer. Septic shock and bacteremia were present in 49% and 20% of patients, respectively, and both were associated with higher blood lactate levels (p < 0.001). A longer ICU LOS was associated with significantly lower levels of lactate. Nonsurvivors had higher blood lactate levels than patients who survived. In our cohort, 934 patients (57%) had hyperlactatemia on admission. Patient characteristics were evaluated in the univariate analysis for association with hyperlactatemia (Table 1), and significant variables were included in the multivariate logistic regression analysis adjusted by either SOFA score or SAPS II. A SAPS II ≥ 45, SOFA score ≥ 7, Charlson Comorbidity Index score ≥ 4, infections with intra-abdominal origin and presence of septic shock remained significantly associated with hyperlactatemia. Of these, the presence of septic shock was the factor that was most related to hyperlactatemia, with ORs (95%CIs) of 2.63 (1.99 -3.45) and 2.72 (2.15 -3.45), when adjusted for SOFA score and SAPS II, respectively. Hyperlactatemia severity Patients presented with mild, moderate and severe hyperlactatemia in 57%, 34% and 9% of cases, respectively ( Table 2). Higher degrees of severity of hyperlactatemia showed higher values of SAPS II (p < 0.001), SOFA score (p < 0.001) and Charlson Comorbidity Index score (p = 0.001), higher incidence of chronic liver disease as a comorbidity (p = 0.018), presence of septic shock (p < 0.001) and bacteremia (p < 0.001) ( Table 2). 
On the other hand, the degree of hyperlactatemia was significantly inversely associated with chronic respiratory disease (p = 0.007). The hospital mortality rate showed a significant increase with the severity of hyperlactatemia, with values of 36%, 55% and 79% for mild, moderate and severe hyperlactatemia (p < 0.001), respectively. Intensive care unit and hospital LOS decreased with increased hyperlactatemia (p < 0.001). Figure 1S (Supplementary material) shows that this effect was more evident in the nonsurvivors. The difference in ICU and hospital LOS between survivors and nonsurvivors was statistically significant in all 3 categories of hyperlactatemia. Effect of hyperlactatemia severity on mortality The ROC curve for hospital mortality had an AUC of 0.64 (95%CI 0.61 -0.67) for blood lactate values compared with 0.75 (95%CI 0.72 -0.77) for SAPS II and 0.69 (95%CI 0.67 -0.72) for SOFA score ( Figure 2S -Supplementary material). In regard to hospital mortality, for a cutoff value of 2mmol/L for lactate levels, specificity was 51% and sensitivity was 69%, and for a cutoff value of 4mmol/L, specificity was 84% and sensitivity was 38%. Comparing the 95%CIs for the respective AUCs, we concluded that the combination of lactate with SOFA score did not significantly improve the performance of each variable alone, increasing the AUC to 0.71 (95%CI 0.68 -0.74) for hospital mortality. DISCUSSION In this large multicenter national study, clinical and epidemiological data from 14 Portuguese ICUs were collected, providing reliable and robust data that represent a picture of the country. We evaluated factors influencing hyperlactatemia on ICU admission of infected patients and its prognostic value. In some of the previous studies that used a single value of lactate on ICU admission, the first value measured on admission was selected; (18)(19)(20) others used the highest value in the first 24 hours. 
(21)(22)(23) In our investigation, the highest value in the first 12 hours of ICU admission was considered. We found that in a heterogeneous group of infected patients admitted to the ICU, hyperlactatemia was highly prevalent (57% of patients). In this group of critically ill patients, nearly half presented with septic shock. In the literature, the incidence of hyperlactatemia on admission in patients with severe infection has been shown to vary between 52% and 76%. (18,(24)(25)(26) Medical literature has demonstrated variable results related to the influence of hyperlactatemia on ICU or hospital LOS. Van den Nouland et al. (27) found no differences in hospital LOS between patients admitted to the emergency department with lactate levels < 4mmol/L and ≥ 4mmol/L. In contrast, Chebl et al. (28) observed that hospital LOS was longer for patients presenting to the emergency department with lactate levels > 4mmol/L in comparison to 2 - 4mmol/L (10.4 ± 12.6 versus 8.1 ± 8.8 days), with a significantly higher mortality in the first group (40.7% versus 12%). In another study by Chebl et al. (23) with a total of 16,447 patients admitted to the ICU, patients with lactate levels between 2 - 3.99mmol/L had a shorter hospital and ICU LOS than those with normal lactate levels; however, when restricted to survivors, differences in LOS were not statistically significant. Soliman et al. (19) studied the relationship between blood lactate and LOS in a mixed ICU. They concluded that survivors with hyperlactatemia had a longer LOS than patients with normal lactate levels; on the other hand, hyperlactatemic nonsurvivors had a shorter LOS than nonsurvivors in the normal lactate level group. In our study, ICU and hospital LOS both decreased across classes of hyperlactatemia; however, when considering only survivors, an increase in ICU LOS was observed.
The shorter LOS associated with higher lactate levels seems to be driven mainly by a higher mortality, as illustrated by the finding that the effect is limited to nonsurvivors. These findings highlight the importance of interpreting the effect of hyperlactatemia on LOS based on survival due to a competing risk bias. The prognostic value of hyperlactatemia for mortality was first suggested by Broder et al. in 1964 (29) when they found that a lactate level > 4mmol/L in patients with shock from different causes was associated with death. Optimal cutoffs of single value lactate measurements in terms of prediction of outcome vary considerably in the literature, which in part may be justified by differences in the choice of lactate value for analysis, type of patients, severity on admission and outcome selection. Rivers et al. (30) selected a serum lactate level of ≥ 4mmol/L to identify patients with severe sepsis or septic shock in the emergency department. In one study published in 2007, including patients infected on admission, an initial lactate level ≥ 4mmol/L was associated with a risk of in-hospital death three times higher than patients with a lactate level < 4mmol/L; (31) in another study also published in 2007, including infected patients, a lactate level ≥ 4mmol/L showed an adjusted (for age and blood pressure) OR of 7.1 for 28-day inhospital death in comparison to patients with a lactate level < 2.5mmol/L. (32) Of note, this cutoff of the lactate level (4mmol/L) was incorporated in the second edition of the Surviving Sepsis Campaign (33) as an indicator of the need for fluid resuscitation in septic patients. In an analysis of the Surviving Sepsis Campaign database, (25) serum lactate values greater than 4mmol/L were significantly associated with in-hospital mortality. 
In the group that had lactate measured within 6 hours, only patients with both a lactate value greater than 4mmol/L and hypotension maintained a statistically significant association after risk adjustment. However, in the Sepsis-3 consensus, the lactate cutoff for septic shock identification was changed from 4 to 2mmol/L, in order to improve sensitivity. (34) In a recent retrospective study including 363 patients with sepsis and septic shock according to the Sepsis-3 definitions, a 6-hour lactate level of ≥ 3.5mmol/L was the optimal cutoff for 30-day mortality. (13) These results reinforce our findings that in-hospital mortality was correlated with hyperlactatemia above 4mmol/L but not with mild hyperlactatemia. Mild hyperlactatemia and even values below the usual cutoff for hyperlactatemia of 2mmol/L have also been described as predictors of mortality. In a retrospective analysis using two septic shock cohorts, (35) patients with initial lactate values between 1.4 and 2.3mmol/L had significantly greater 28-day mortality than patients who had baseline lactate values ≤ 1.4mmol/L in both cohorts; however, the hazard ratio was not different from that obtained with blood lactate values between 2.3 and 4.4mmol/L (1.78 versus 1.65). Only for lactate values ≥ 4.4mmol/L was a significantly higher hazard ratio of 3.52 obtained. Several studies have described an increased risk of death related to an increase in blood lactate values. (18,22,23,30) Ferreruela et al. (36) observed an in-hospital mortality of 32.5% for critically ill patients in a mixed ICU with lactate concentrations during the ICU stay between 5 and 10mmol/L, which increased to 74.6% (p < 0.001) for hyperlactatemia > 10mmol/L. Haas et al. (9) found an ICU mortality approaching 80% for patients with lactate levels > 10mmol/L on at least one occasion in a retrospective study with 14,040 ICU patients. In our study, moderate (4 - 9.9mmol/L) hyperlactatemia was the lowest category predicting in-hospital mortality.
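The severity classes used in this discussion can be made explicit with a small helper. This is a hedged sketch: the bin edges below (2, 4 and 10 mmol/L) are taken from the cutoffs quoted in the text, while the class labels are our shorthand rather than the study protocol's wording.

```python
# Hedged sketch: hyperlactatemia classes implied by the cutoffs discussed in
# the text (2, 4 and 10 mmol/L); labels are illustrative, not the protocol's.
def lactate_class(lactate_mmol_l: float) -> str:
    """Map an admission lactate value (mmol/L) to a severity class."""
    if lactate_mmol_l < 2.0:
        return "normal"
    if lactate_mmol_l < 4.0:
        return "mild"
    if lactate_mmol_l < 10.0:
        return "moderate"
    return "severe"
```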
Hospital mortality for lactate values > 10mmol/L reached 79%, which is in accordance with the previously mentioned studies. (9,36) In the literature, values described for the AUC of ROC curves of admission lactate for in-hospital and 28-day mortality have varied between 0.63 and 0.70 in patients with suspected infection, (37) sepsis (18,26,38) and septic shock. (12,35) In our study, the AUC value for the ROC curve of lactate level for in-hospital mortality was in the range of previously described values (0.64). The AUC of the ROC curve of lactate was lower than that obtained with the scores for organ dysfunction, SAPS II and SOFA. The addition of lactate level to the SOFA score resulted in a marginally higher AUC of the ROC curve for hospital mortality (0.71). Our study has several limitations. First, it is a post hoc analysis and was therefore not designed for this specific purpose. In the original protocol, the value of lactate collected was the highest in the first 12 hours of admission, without any definition of the time of the first lactate measurement or the time to repeat measurements. However, our study was prospective, with a large sample size, and was conducted over a complete year. In the literature, most of the studies are retrospective, and some establish even longer intervals (24 hours) for defining the lactate level at admission. (7,22,23) Additionally, data on the possible pathophysiology or underlying mechanism of hyperlactatemia were not collected. There are multiple reasons for lactate elevation with different clinical relevance, and these confounders were not all categorized in this study; however, it was observed that in a general ICU population, hyperlactatemia is associated with mortality irrespective of underlying disease. Second, our study did not collect any data about lactate kinetics over time. However, lactate level and lactate clearance have both been shown to be useful targets in patients with suspected infection (11) or septic shock.
(12) On the other hand, using a single value of lactate can be an easier and still valid method for predicting outcome. Third, septic shock patients were classified according to the Sepsis-2 definition (15) accepted at the time of data collection, which has now been replaced by the Sepsis-3 definition. (36) Although there was no direct correspondence between these two definitions, we maintained the septic shock subgroup in the analysis since it allowed us to identify the more seriously ill patients. Fourth, since this was a multicenter study, lactate measurements were not performed using similar equipment in the different institutions, which could induce some bias in the values obtained in different centers. However, this was a multicenter study with a large sample size, allowing robust and statistically supported conclusions that can be generalized to other mixed ICUs.

CONCLUSION

Hyperlactatemia on intensive care unit admission was present in more than half of a heterogeneous group of infected patients admitted to the intensive care unit and was a strong predictor of mortality. Blood lactate levels correlate independently with in-hospital mortality for moderate and severe degrees of hyperlactatemia.
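As an illustration of the discriminative analysis reported in the discussion (an ROC AUC of 0.64 for lactate alone), the AUC of a single continuous predictor against a binary outcome can be computed with a simple rank-based estimator. The patient data below are synthetic, invented solely to show the mechanics; none of the numbers come from the study.

```python
# Illustrative only: rank-based ROC AUC for a single predictor (lactate)
# against a binary outcome (in-hospital death). Data are synthetic.
def roc_auc(scores, labels):
    """Probability that a random positive case outranks a random negative one."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

lactate = [1.2, 1.8, 2.5, 3.1, 4.4, 5.9, 8.0, 12.3]  # synthetic mmol/L values
died    = [0,   0,   0,   1,   0,   1,   1,   1]     # synthetic outcomes
auc = roc_auc(lactate, died)
```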
Impact analysis of dynamical downscaling on the treatment of convection in a regional NWP model-COSMO: a case study during the passage of a very severe cyclonic storm "OCKHI"

On behalf of myself and my co-authors, I would like to extend my sincere thanks to you and your supporting Editorial team for your efforts in the evaluation of our manuscript. We would also like to place on record our sincere appreciation to Dr. Ronny Petrik and the other anonymous reviewer for their valuable comments and suggestions, which have helped us in extending the scope of the paper and improving the quality of the scientific content of our manuscript. We have addressed almost all the suggestions/queries raised by both reviewers and have made the necessary modifications in the manuscript.

Comments from Referee

The study evaluates the representation of a cyclone over the Arabian Sea in COSMO model simulations at different horizontal resolutions and with different treatments of convection. More specifically, the authors performed simulations at a grid spacing of (i) 0.0625° with parameterized convection, (ii) 0.025° also with a convection scheme, and (iii) 0.025° with explicit convection. Precipitation and CAPE fields from COSMO are then compared to ERA-Interim reanalysis data in an attempt to evaluate which model configuration better represents the convection and precipitation during the passage of the cyclone over the Arabian Sea along the Indian Peninsula. Overall, I found the study potentially relevant and the manuscript carefully written. However, there are several major flaws in the model setup of the experiments, the meteorological evaluation, and the choice of the data set that serves for the model evaluation. Therefore, a substantial part of the analysis is invalid and the conclusions remain unsubstantiated. In my view, the required corrections go beyond major revisions. However, the study has much potential when these major comments are accommodated.
Tropical cyclones frequently cause severe socioeconomic impacts, and their simulation and predictability are of high interest to the scientific community and to operational weather forecasting. Also, the model experiments with different representations of convection are relevant and deserve attention. Therefore, I would like to encourage the authors to implement these changes and to resubmit the manuscript.

Author's Response

We would like to thank the reviewer for endorsing the potential relevance of our manuscript ("I found the study ... carefully written."), and we also agree with the objections raised thereafter. During the revision of our manuscript, we have modified the numerical experiments with COSMO by replacing the DPC simulations with a new set of simulations, wherein the grid resolution of COSMO is kept at 0.0625° and the convection parameterization scheme is switched off. Furthermore, for the meteorological evaluation of our model simulations, we have now included fine-resolution ERA5 and NCEP FNL Reanalysis fields (available at a grid resolution of 0.25°). For validation of the rainfall simulations, we have made use of satellite-based IMERG (Integrated Multi-satellitE Retrievals for Global precipitation measurement) observations. Furthermore, the model domain of COSMO is also extended to a larger area over the Arabian Sea which covers the entire track of the OCKHI storm, and all the simulations are carried out for the new domain. After the incorporation of the new datasets, with re-designed numerical experiments, the Results and Discussion as well as our Conclusions are substantially improved.

Author's changes in the manuscript

• Introduction: The scope of the manuscript is revised.
• Data: Details of ERA5, NCEP FNL Reanalysis and IMERG observations are added.
• Numerical Experiments in the COSMO Model: DPC simulations are eliminated and are replaced with CNC simulations (0.0625° grid resolution, and convection scheme switched off).
• Figures: Figure 5 and Figure 7 are eliminated.
One new figure is included to show the sea level pressure and wind vectors from reanalysis fields and the concurrent simulations from COSMO. Furthermore, one more figure is added for the representation of the vertical cross section of equivalent potential temperature along the latitudes.
• As an outcome of the above-mentioned modifications, the Results and Discussion and Conclusions sections are also substantially revised.

Author's Response

We agree with the reviewer's suggestions about the "dynamical downscaling". In the revised manuscript, we have included meteorological fields of CAPE, sea level pressure and wind vectors from the ICON global model. These fields are later compared with the COSMO simulations at dynamically downscaled finer grids. Furthermore, we have included satellite-based precipitation measurements for validation of the rainfall simulations. We also agree with the reviewer about rephrasing the sentences in Section 4 dealing with the dynamical downscaling. In this section, we have explicitly mentioned that the present work deals with the sensitivity of the model's grid resolution to the convection parameterization scheme.

Author's changes in the manuscript

• Data: Details about the ICON global model and other reanalysis fields are included.
• Results and Discussion: CAPE, sea level pressure and wind vectors extracted from global reanalysis fields are compared with COSMO model simulations. Similarly, rainfall simulations of COSMO are validated against the satellite-based IMERG observations.
• As an outcome of the above-mentioned modifications, the Results and Discussion and Conclusions sections are also substantially revised.

Observations for model validation and comparison

The study uses the ERA-Interim data set from the ECMWF as a means to validate the COSMO model simulations.
This approach is problematic since the ERA-Interim reanalysis is produced at a resolution of about 0.7° × 0.7°, much coarser than the COSMO model simulations, which use a horizontal grid spacing of about 3-7 km. Later, in the analysis, it indeed becomes clear that the data is much coarser (e.g., page 8, lines 26-27) and that the center of the cyclone is further off in the ERA-Interim data than in the COSMO simulations. As a consequence, the comparison between the COSMO simulation experiments and ERA-Interim as shown in Figures 5 and 7 is not relevant. Therefore, the conclusions based on this comparison, as for example phrased in the last sentence of the abstract, are not supported by a valid analysis. I highly recommend using precipitation observations based on satellite estimates, for example TRMM (Huffmann et al., 2007) or any other satellite product, whereas CAPE fields from the operational IFS analysis from ECMWF may provide higher-resolution data than the ERA-Interim reanalysis.

Author's Response

AGREED AND IMPLEMENTED. CAPE measurements from fine-resolution global reanalysis fields (ERA5 and NCEP FNL, both with 0.25° grid resolution) are used in the revised figures. We also accept the suggestion to make use of precipitation observations based on satellite estimates. In this regard, we would like to mention that TRMM observations over the oceanic regions for the period of the OCKHI storm are not available. Hence, as an alternative option, we have used satellite-based IMERG precipitation measurements, which are available at 0.10° grid resolution and are widely used for precipitation studies.

Author's changes in the manuscript

• IMERG satellite-based precipitation measurements are used for depicting the observed 24 h accumulated rainfall.
• Accordingly, the write-up describing the modified figures is also revised.

Model domain

The model domain used is very small, with only 10 degrees / 1000 km in the zonal and meridional directions.
In fact, the cyclone is located near the boundary of the domain at the initial time of the simulations with +48 and +36 hour lead times, as shown in Figure 2. This model configuration is problematic for obtaining proper results. I recommend using a model domain that sufficiently covers the tropical cyclone throughout the simulation. As also written in section 5.1.2 (page 11, lines 30-32), it is understood that computational resources can be a limitation; however, this cannot justify a model simulation that does not support a valid study.

Author's Response

AGREED AND IMPLEMENTED.

Author's changes in the manuscript

• The COSMO domain is enlarged over the Arabian Sea (6.0°N to 22.0°N; and 66.0°E to 82.0°E).

Simulation experiments

The study uses three different simulation experiments: (1) with a grid spacing of 0.0625° (∼7 km) and convection parameterized, (2) with a grid spacing of 0.025° (∼3 km) and convection parameterized, and (3) with a grid spacing of 0.025° (∼3 km) and without a convection scheme. Following previous studies, convection schemes can potentially be switched off at the order of a 7 km grid spacing, whereas convection may largely be resolved when using a grid spacing of 3 km (e.g., Marsham et al., 2013). This is also explicitly stated in the manuscript on page 7, lines 17-18. The results show that experiment (2) does not add much to experiment (1), whereas the convection-permitting simulation of experiment (3) shows a lot of detail as compared to experiment (2). Therefore, I would recommend replacing experiment (2) by a simulation with a grid spacing of 0.0625° (∼7 km) and without the use of a convection scheme.

• At several places (e.g., lines 2, 5 and 8 on page 3), the text speaks about a cyclone. Is there a specific reason not to speak about a tropical cyclone? The term "cyclone" is a very general term that also covers the extratropical cyclones found in the extratropics. AGREED AND REPLACED.
Wherever relevant, we have replaced the word "cyclone" with "tropical cyclone" in the respective sections.
• Page 1, lines 19-20. Instead of speaking about "the smallest and most compact weather processes", please speak in terms of spatial and time scales. AGREED AND CORRECTED. The necessary modifications are made in this sentence by incorporating the spatial and time scales of convective processes.
• AGREED AND IMPLEMENTED. The above sentences are rephrased and corrected.
• Section 2. Please, specify which COSMO version is used. AGREED AND INCLUDED. Version 5.05 of COSMO is used for the simulations in the present study.
• Page 4, lines 12-13. The phrase "Since the convective processes ... ... much smaller than those resolved by mesoscale and regional NWP models" is not entirely correct. In the case of high-resolution simulations, for example with a horizontal grid spacing of 3 km, convective processes may be largely resolved by the model. AGREED AND CORRECTED. The above sentences are rephrased and corrected.
• AGREED AND IMPLEMENTED. The OCKHI cyclonic storm was categorized as a Very Severe Cyclonic Storm (VSCS), and it formed in the month of December. Historically speaking, none of the Depressions or Cyclonic Storms formed over the Comorin Sea in the month of December has become a VSCS in the last 100 years or so. Secondly, this storm attained the status of a Cyclonic Storm from the stage of Depression within 6 h. Its rapid intensification was yet another extremely unusual event. These details are included in the revised manuscript.
• Page 8, lines 12-13. Please, state the source of these precipitation observations. AGREED AND INCLUDED. These observations were cited from the IMD report on OCKHI. Appropriate citations are included in the revised manuscript.
• I recommend restructuring sections 5.1.1 as 5.2 and 5.1.2 as 5.3. AGREED AND RE-NUMBERED. The section numbering is corrected accordingly.
• Page 9, line 4.
It is not only the state of the lower troposphere that defines CAPE. Please, replace "lower atmosphere" by "the thermodynamic conditions". AGREED AND REPLACED. AGREED AND INCLUDED. The CAPE fields correspond to 00 UTC of 3 December 2017.
• Page 9, lines 13-14 "The ECMWF fields were almost off by more than 100 kms from ..." and page 10, lines 12-13 "... the CAPE magnitudes obtained from ECMWF fields were always overestimated ..." and page 10, lines 24-29 "In this regard ... ... a smaller mesoscale region only". This shows that the ERA-Interim data is not suitable for validation of the model simulations; see also major comment number 2. AGREED. We have included fine-resolution global data from the ERA5 and NCEP FNL reanalyses along with the ERA-Interim data.
• Page 10, line 32. Are these precipitation amounts per day? AGREED AND INCLUDED. Yes, these precipitation amounts are for the 24 h between 00 UTC of 2 December 2017 and 00 UTC of 3 December 2017. These details are included in the revised manuscript. AGREED AND REPHRASED. This sentence has been rephrased in the revised manuscript.
• The conclusions at page 11, lines 19-20 "However, switching off ... ... rainfall over the Arabian Sea." and at page 12, lines 12-13 "Fine representation ... ... accumulated rainfall magnitudes." are invalid due to the comparison of COSMO simulations to ERA-Interim data. Satellite-based estimates may provide a base for a more realistic and useful comparison; see also major comment number 2. Moreover, Figure 6 shows that the DNC simulation has intense rainfall, although area-averaged amounts as in Figure 7 may be lower as compared to ERA-Interim. AGREED AND CORRECTED. We have included satellite-based IMERG precipitation measurements for validation of the rainfall simulations. Thus, these points are well addressed in the revised manuscript.
• Page 12, lines 30-31. The sentence "There is a visible increase ... ... over the tropical oceans" needs to be supported by references or otherwise be removed.
AGREED AND REMOVED. The above sentence is removed.

Comments from Referee (Writing Comments)

Below we present a summary of all the "Writing Comments" raised by the reviewer (in italics), with our responses/changes in the manuscript just beneath the reviewer's comments. Overall, we have taken care of all these comments in the revised version of the manuscript. AGREED AND REPLACED. • Page 1, line 15. Please, replace "an NWP" by "a NWP" and write out NWP. Abbreviations used in the abstract need again to be defined within the text of the manuscript upon first use. AGREED AND CORRECTED. • Page 1, line 20 and Page 2, line 1. Please, replace "surface to the troposphere" by "surface to the upper troposphere". AGREED AND REPLACED. AGREED AND REPLACED. • Page 2, lines 9-10. Please, rephrase the sentence "However, the process of convection ... interaction with radiation", for example, as "Moreover, convection involves complex interactions with cloud formation which influence the atmospheric circulation through radiative effects." or in a similar direction. AGREED AND REPHRASED. AGREED AND REPLACED. AGREED AND REPLACED. • Page 2, line 16. Please, replace "is apparently inter-linked with" by "constrained by" or in a similar direction. AGREED AND REPLACED. AGREED AND REPLACED. AGREED AND REMOVED. • AGREED AND CORRECTED. • Page 4, line 9. Please, replace "the conserved framework" by "the conservation of" and, in line 10, replace "Different schemes" by "Schemes". AGREED AND REPLACED. AGREED AND REPLACED. AGREED AND REPLACED. AGREED AND REPLACED. AGREED AND REPLACED. AGREED AND REPLACED. NO CHANGES ARE MADE. We could not find the above mistake in the original manuscript. AGREED AND REPLACED. NO CHANGES ARE MADE. The above sentence is correct to the best of our knowledge, and "form" is correctly used; hence, no changes are made. AGREED AND REMOVED. • Page 6, line 27 as well as page 7, line 8. Please, replace "under the framework of" by "using the" or "with the". AGREED AND REPLACED.
AGREED AND IMPLEMENTED THROUGHOUT THE MANUSCRIPT. • Page 7, lines 2-3. Please, remove "to the COSMO model" and write "of the actual episode". AGREED AND IMPLEMENTED. AGREED AND REMOVED. • Page 7, lines 15-17. Please, rewrite this long sentence. Stating that you switched off the convection scheme or use a convection-permitting simulation is sufficient. AGREED AND RE-WRITTEN. • Page 7, line 14. Please, rephrase "are treated directly" by "are explicitly simulated", or "permitted", or in that direction. AGREED AND REPHRASED. AGREED AND REPLACED. AGREED AND REPLACED. • Page 7, line 30 and at many other places in the manuscript. Rewrite "01st December" and "5th December" as "1 December" and "5 December". AGREED AND IMPLEMENTED THROUGHOUT THE MANUSCRIPT. AGREED AND REPLACED. AGREED AND CORRECTED. The sentence has been rephrased and rewritten as "landfall and final dissipation". AGREED AND REPLACED. AGREED AND REPLACED. • Page 9, line 10. Please, remove ", and the category of storm was retained as VSCS.". AGREED AND REMOVED. AGREED AND REPLACED. • Page 9, line 20. Please, replace "which was actually not true for" by "occurred on". AGREED AND REPLACED. AGREED AND CORRECTED. AGREED AND REPLACED. • Page 10, line 15. Define the abbreviation "CS" or write it out in full. AGREED AND CORRECTED. All abbreviations were once again carefully checked. AGREED AND REMOVED. AGREED AND REPLACED. AGREED AND REPLACED.

Comments from Referee (Figures and Tables)

• Figure 1. Please, remove the word "downscaling" in the caption, as all simulations are downscaled in the sense that the simulations are fed by global data. Instead, indicate the horizontal grid spacing of 7 and 3 km. NECESSARY CORRECTIONS ARE DONE IN THE FIGURE CAPTION.
• Figure 2. The caption speaks about CAPE from COSMO. The CAPE fields extend outside the model domain indicated by the black box, if I understand correctly. Is this perhaps CAPE from ERA-Interim? AS THE MODEL DOMAIN IS EXPANDED, THIS FIGURE ITSELF IS REVISED.
• Figure 6. Is this the accumulated rainfall in the 24 hours prior to or after 00 UTC, 3 December 2017? Please, clarify. THE FIGURE CAPTION IS CORRECTED AND THE ABOVE DETAILS ARE INCLUDED.
• Table A2. For the sake of consistency, I would recommend also including the results from the CPC simulation. INCLUDED IN THE TABLE.
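As a side note on the grid spacings discussed throughout this exchange, the quoted equivalences (0.0625° ≈ 7 km, 0.025° ≈ 3 km) follow from the mean length of one degree of latitude. The sketch below is a back-of-envelope check under that approximation, not part of the manuscript.

```python
# Back-of-envelope check (not from the manuscript): converting the COSMO grid
# spacings quoted in the review to approximate kilometres. The conversion
# factor assumes ~111.2 km per degree of latitude, a standard approximation.
KM_PER_DEG = 111.2

def grid_spacing_km(spacing_deg: float) -> float:
    """Approximate grid spacing in km for a spacing given in degrees."""
    return spacing_deg * KM_PER_DEG
```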
Investigation of Gold Electrosorption onto Gold and Carbon Electrodes using an Electrochemical Quartz Crystal Microbalance

The adsorption behavior of Au3+ ions on metal electrodes has been studied using an electrochemical quartz crystal microbalance combined with the cyclic voltammetry technique. The experiments were carried out for HAuCl4 using 0.1 mol·L⁻¹ HCl (pH ~1) as the background electrolyte solution. The kinetics of the electroreduction of Au3+ ions on rice-husk-based activated carbon and gold electrodes in chloride electrolytes was studied by cyclic voltammetry and the electrochemical quartz crystal microbalance, with variation of the scan rate in the range of 5-50 mV·s⁻¹. The diffusion coefficient of Au3+ ions in the tested solution on gold and carbon electrodes was determined by the cyclic voltammetry method on the basis of the Randles-Ševčik equation. It is found that the electroreduction of gold proceeds via the discharge of complexes to the formation of metallic gold with a current efficiency of 97-99%. Scanning electron microscopic images of the gold-adsorbed carbon surface were taken to examine the gold particles and their morphology. In the SEM images, it is clearly seen that the surface of the carbon has a relief structure and that gold has grown in the form of clusters. The smallest gold nanoparticles that could be examined were 100-250 nm in diameter on the surface of the carbon electrode.

Introduction

The electrochemical quartz crystal microbalance (EQCM) is a modern, powerful method used in electrochemical experiments. The EQCM monitors the change of frequency simultaneously with the electrochemical signal. The change in frequency is associated with changes in mass due to deposition or adsorption of a substance onto, or dissolution of a substance from, a working electrode [1]. The EQCM has been used simultaneously with quasi-steady-state techniques such as cyclic voltammetry (CV). Mass changes during electrolysis can be determined from ∆m vs.
potential curves, while ∆m vs. charge density curves allow evaluating the number of Faradays exchanged per mole of electro-active species using Faraday's law of electrolysis. The change in the frequency of oscillation (∆f) is sensitive to the change in mass deposited per unit area (∆m) according to the Sauerbrey equation (Eq. 1):

∆f = -C_f·∆m (1)

where ∆f is the change in frequency (Hz), C_f is the sensitivity factor of the crystal (0.0815 Hz·ng⁻¹·cm² for a 6 MHz crystal at 20 °C) and ∆m is the change in mass per unit area (g·cm⁻²). C_f is provided by Eq. (2), in which n is the number of the harmonic at which the crystal is driven (this factor is set to 1, by design), f is the resonant frequency of the fundamental mode of the loaded crystal (Hz), ρ_q is the density of quartz (2.648 g·cm⁻³) and μ_q is the shear modulus of quartz (2.947·10¹¹ g·cm⁻¹·s⁻²):

C_f = 2nf² / (ρ_q·μ_q)^(1/2) (2)

From Eqs. (1) and (2), the change in mass can be calculated as follows (Eq. 3):

∆m = -∆f / C_f (3)

In [4], the authors compared the response of the EQCM during the very early stages of the deposition of silver and copper on a gold substrate. The formation of copper ions as soluble intermediates during the deposition of copper causes a deviation of the frequency response from the theoretically expected one. Significant stress, which is observed only in the case of copper, is attributed to the large difference in lattice parameter between gold and copper. During silver deposition, the frequency response follows the Sauerbrey equation, and no stress is observed. The authors of [5,6] applied the EQCM method to the investigation of copper electrodeposition. Anodic oxidation of copper electrodes in alkaline solutions was investigated using EQCM, CV, chronoamperometry (CA) and electrochemical impedance spectroscopy (EIS) measurements [5]. The participation of Cu3+ soluble species in the electrocatalytic oxidation of ethanol was proved by EQCM measurements, these data providing valuable information on the mechanism of the electrode process and the formation of Cu2+ insoluble species from the reaction of Cu3+ with ethanol.
Also, results on the copper electrodeposition mechanism at different pH values were obtained using EQCM [6]. Direct reduction of Cu2+ and reduction of copper oxide (CuO) occur simultaneously at pH 2.0 and 4.5. Activated carbon is a carbon-containing adsorbent with a large surface area and a developed porous structure. Activated carbon can be obtained from carbon-containing raw materials (rice husk, apricot stones, walnut shells, etc.) by physical and chemical activation methods [7-10]. According to the literature review, there are many works on the use of carbon adsorbents for the adsorption of metals (Au3+, Cr6+, As3+ and others) and metal compounds [11-17]. However, only a few publications on the use of the EQCM method for the study of Au3+ ions on activated carbon adsorbents can be found in the literature. In this study, we focused on the electrochemical quartz crystal microbalance method. This article reports a study of the adsorption of Au3+ ions on activated carbon and, for comparison, gold electrodes by a combined electrochemical quartz crystal microbalance-cyclic voltammetry (EQCM-CV) method. The frequency change during the adsorption of Au3+ ions was determined by EQCM while simultaneously measuring the electrical charge in the electrochemical experiments.

Experimental

The resonant frequency of the quartz crystal and the electrochemical experiments were monitored by an Autolab Potentiostat/Galvanostat Model AUT83945 (PGSTAT302N). The detection of chloroauric solution (HAuCl4) was carried out by EQCM-CV analysis. The EQCM-CV studies were conducted in a three-electrode cell with a quartz plate coated with activated carbon or gold as the working electrode, the active surface of which was 0.361 cm², a gold wire (Au) as the counter electrode, and saturated Ag/AgCl in 3 mol·L⁻¹ KCl as the reference electrode.
A working solution with a concentration of 100 mg·L⁻¹ was prepared by diluting the contents of ampoules of the state standard sample of Au3+ ions (company «IRGIREDMET», Russia) with distilled water. The basic background electrolyte was a solution of 0.1 mol·L⁻¹ hydrochloric acid. The working electrode was made by coating a quartz plate with activated carbon from rice husk. The carbon coating consists of 85 wt.% activated carbon, 10 wt.% polyvinylidene fluoride (PVDF) from Sigma-Aldrich and 5 wt.% carbon black (C-65, Timcal C-NERGY Imerys). A detailed description of the procedure for producing the activated carbon is given in [18]. The morphology of the activated carbon after sorption of gold ions was determined by scanning electron microscopy (SEM, Quanta 3D 200i Dual System, FEI). The surface area of the activated carbon was investigated on the «Sorbtometr M» analyzer by low-temperature nitrogen adsorption using the Brunauer-Emmett-Teller method (BET method). As shown previously [18], the carbon material obtained from rice husk has a specific surface area of 2900 m²·g⁻¹.

Electroreduction of Gold on a Gold Electrode

The carbonized and activated rice husk (CARH) has a rather low redox potential, and its stationary potential is 0.05 V (Ag/AgCl). The measured stationary (real) potential of [ ] in a hydrochloric acid medium is equal to 0.47 V (Ag/AgCl). The potential difference between gold (the oxidizing agent) and the sorbent (the reducing agent) is 0.42 V relative to the reference. Figure 1a shows the cyclic voltammetry measurements and the frequency variation using the EQCM-CV procedure. These curves correspond to the deposition of Au3+ ions on a gold-coated quartz crystal electrode in 100 mg·L⁻¹ HAuCl4 during a potential scan between 0 and 0.95 V at a scan rate of 5 mV·s⁻¹. Scanning starts from +0.8 V toward the cathodic region.
All frequency changes were measured with respect to the zero ∆Frequency, which was set using the Reset EQCM ∆Frequency command while the working electrode was kept at 0.95 V vs Ag/AgCl (3 M KCl). As the potential is scanned in the negative direction, the mass deposition of Au3+ ions onto the gold electrode starts at around +0.65 V. This is followed by a sharp increase of the cathodic current at around +0.5 V, which is detected as a sharp decrease in the EQCM ∆Frequency. The frequency continues to decrease after the lower vertex potential is reached, until the current passes through 0 and becomes positive again. This triggers an increase of the frequency, and the ∆Frequency value finally returns to roughly 0 Hz at the positive end of the scan as the deposited gold is removed from the surface. Figure 1b illustrates the changes in the mass of the electrode during the discharge-ionization process of a gold electrode on a piezoelectric element within the cell filled with a gold-containing acidic solution. The figure shows the minima of the change in the oscillation frequency of the piezoelectric quartz; in absolute values, these appear as maxima. The change in the oscillation frequency is given with a negative sign because, during the electrodeposition of gold, an increase in the mass of the element occurs, leading to a decrease in the oscillation frequency. Prior to the start of the voltammetric measurements, a calibration was carried out to take into account the influence of the solution mass on the change in the oscillation frequency of the piezoelectric quartz; the zero point (start of measurement) corresponds to the open circuit potential E_ocp. A further decrease in the oscillation frequency from a potential of -0.8 to 0 V can be divided into two sections.
The first section, down to −0.4 V, is nonlinear; it corresponds to the formation of the effective thickness of the diffusion layer and is characterized by a peak on the voltammogram. The second section is linear, and during it the electroreduction of gold at the limiting diffusion current is observed between −0.4 and −0.6 V (the diffusion layer has an effective thickness). The observed minima of the oscillation frequency of the piezoelectric crystal characterize the mass of electrodeposited gold; in particular, at a potential scan rate of 5 mV·s⁻¹, the frequency shift at the minimum is 645.82 Hz. By integrating the I–t curve of the measured voltammograms, the amount of electricity (Q) spent on the reduction of gold was calculated and is presented in Table 1. From the value of Δf, the practical mass (m_pr) of gold after its electrodeposition was calculated according to Eq. 1. Since the practical mass was known, the number of electrons participating in the reaction could be calculated according to Faraday's law. As shown in Table 1, the calculated number of electrons is equal to three; this allows us to represent the electroreduction of gold according to the following reaction:

[AuCl₄]⁻ + 3e⁻ → Au⁰ + 4Cl⁻

Based on the known number of electrons, the current efficiencies (CE) of the gold electrodeposition process were calculated and are presented below. When the potential stepped from 0.95 V (a value where no Au is deposited on the gold electrode surface) to +0.7 V, the average change in frequency was measured as 645.82 Hz. Using Sauerbrey's Eq. (1), the change in frequency can be correlated with the change in mass. Comparing the theoretical mass with the experimental one (m_pr), a very good agreement is seen (Table 1). The cyclic voltammetry recorded for gold electrodes at different scan rates from 5 mV·s⁻¹ to 50 mV·s⁻¹ is shown in Fig. 2.
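The two calculations described above — converting the EQCM frequency shift into a deposited mass via Sauerbrey's equation, and then recovering the number of transferred electrons from Faraday's law — can be sketched as follows. This is a minimal illustration, assuming the crystal's mass-sensitivity constant is known from the calibration mentioned earlier; the function names and the constant value used are my own, not from the paper:

```python
F = 96485.0    # Faraday constant, C/mol
M_AU = 196.97  # molar mass of gold, g/mol

def sauerbrey_mass(delta_f_hz, c_ng_per_hz):
    """Mass gain (ng) from the EQCM frequency shift via Sauerbrey's equation.

    c_ng_per_hz is the crystal's mass-sensitivity constant (assumed known
    from calibration). Deposition lowers the frequency, so Δf < 0 gives a
    positive mass gain.
    """
    return -delta_f_hz * c_ng_per_hz

def electrons_per_ion(charge_c, mass_g):
    """Number of electrons per ion from Faraday's law: n = Q·M / (F·m),
    where Q is the integrated charge (C) and m the deposited mass (g)."""
    return charge_c * M_AU / (F * mass_g)
```

Fed with the integrated charge Q from the I–t curve and the Sauerbrey mass m_pr, `electrons_per_ion` should return approximately 3, consistent with Table 1.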
The inset shows the dependence of the cathodic peak current on the square root of the scan rate. The figure shows that the cathodic peak current (j_pc) increases with the potential scan rate (v). This dependence is described by the Randles–Ševčík equation, and the linear dependence of j_pc vs. v^1/2 (Fig. 2) indicates that the process is diffusion-limited. From the slope of the linear dependence, the diffusion coefficient of the ions in solution was calculated to be 1.6·10⁻⁵ cm²·s⁻¹.

Electroreduction of gold on a carbon electrode

Cyclic voltammetry was performed on the carbon-coated electrodes and, as can be seen in Fig. 3 (black curve), resulted in an oxidation peak (0.80 V) in the reverse scan and a reduction peak (0.55 V) in the forward scan. A carbon electrode was also examined in this potential region in the background electrolyte, 0.1 M HCl (Fig. 3, red curve); however, no clear redox processes were observed. Since CARH has a large surface area, large charge-discharge currents of the electric double layer (non-Faradaic currents) were revealed on the voltammogram. Therefore, in order to calculate the kinetic data for the gold electroreduction reaction on this material, compensation must be made for the non-Faradaic current. For this purpose, the double-layer currents (Fig. 3, red curve) were subtracted from the cathodic peak current (Fig. 3, black curve). The resulting peak current values j_pc were then used to calculate the diffusion coefficient. Figure 4a illustrates cyclic voltammograms of the background electrolyte, which were subsequently used to compensate for the non-Faradaic currents. Figure 4b demonstrates cyclic voltammograms of the gold-containing electrolyte with visible cathodic current peaks.
However, at relatively high potential scan rates (20-50 mV·s⁻¹), cathodic current peaks are not observed, indicating that the high-surface-area carbon electrode does not have time to charge fully during the short measurement times. For this reason, low potential scan rates of 1 mV·s⁻¹ to 10 mV·s⁻¹ were selected to detect the current peaks (j_pc). As can be seen in Fig. 5, at low electrode polarization rates, cathodic current peaks are clearly visible. The presence of cathodic current peaks makes it possible to determine the kinetic parameters of the reaction, provided the compensation of the charge-discharge currents of the electric double layer is correct. The linear dependence of j_pc vs. v^1/2 (Fig. 5, inset) was also determined from the voltammograms of the carbon electrode, and the diffusion coefficient of Au³⁺ ions was equal to 56.0 cm²·s⁻¹·g⁻¹. This value for the electroreduction of [AuCl₄]⁻ ions on the carbon electrode is four orders of magnitude higher than that on the gold electrode. The overestimated value of the diffusion coefficient in this case (in aqueous electrolytes, the diffusion coefficient of ions lies in the range 10⁻³ to 10⁻⁷ cm²·s⁻¹) is explained by the high specific surface of the activated electrode, and the calculated apparent diffusion coefficients need to be normalized by the mass of the carbon material. In addition, the voltammogram (Fig. 5) shows that the potential difference (ΔE_p) between the anodic (E_pa) and cathodic (E_pc) peaks is approximately 60 mV smaller than that of the gold electrode, and the cathodic currents are ~50 times higher. This indicates the catalytic effect of this electrode, which has reducing properties and a more negative stationary potential (the system H⁺ | activated carbon) than the system [AuCl₄]⁻ | Au given above.
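The extraction of the diffusion coefficient from the slope of j_pc vs. v^1/2, used for both electrodes, follows the Randles–Ševčík relation. A minimal sketch, assuming the standard 25 °C form of the equation with current density in A·cm⁻², concentration in mol·cm⁻³ and scan rate in V·s⁻¹ (the round-trip numbers below are illustrative, not the paper's raw data):

```python
import math

def randles_sevcik_D(slope, n, conc_mol_cm3):
    """Diffusion coefficient D (cm^2/s) from the slope of the cathodic peak
    current density j_pc (A/cm^2) plotted against sqrt(v) ((V/s)^0.5),
    using the 25 C Randles-Sevcik form:
        j_p = 2.69e5 * n**1.5 * C * sqrt(D) * sqrt(v)
    """
    return (slope / (2.69e5 * n ** 1.5 * conc_mol_cm3)) ** 2

# Illustrative round trip: 100 mg/L Au(3+) is ~5.08e-7 mol/cm^3,
# with n = 3 electrons as established by the EQCM analysis.
C = (0.1 / 196.97) / 1000.0                      # mol/cm^3
slope = 2.69e5 * 3 ** 1.5 * C * math.sqrt(1.6e-5)  # synthetic slope
```

Feeding this synthetic slope back through `randles_sevcik_D` recovers D = 1.6·10⁻⁵ cm²·s⁻¹, the value reported for the gold electrode.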
Scanning electron microscopy of the gold-loaded carbon surface was carried out to verify the presence of the gold particles and examine their morphology. SEM images of the carbon electrode surface after deposition are presented in Fig. 6. The images clearly show that the carbon surface is irregular and that gold grew not as a uniform thin film but as spherical and prolate (elongated) particles. The smallest gold nanoparticles that could be examined on the surface of the carbon electrode were 100-250 nm in diameter. Submicron gold particles mostly did not grow separately; they are agglomerated and cannot be distinguished as individual particles in the microimages (Fig. 6a,b). Why they agglomerate is still an open question. Elemental analysis by energy-dispersive X-ray analysis was carried out to verify that these bright particles are gold (Fig. 6c). Using this method, it is possible to obtain gold nano- and submicron particles, and gold can be extracted (recovered) from production waste.

Conclusions

In this work, the electroreduction of Au³⁺ ions on activated carbon and gold electrodes was investigated. Using the piezoquartz microbalance method in combination with voltammetry, the number of electrons participating in the reaction was determined. It was also found that the electroreduction of gold proceeds via the discharge of [AuCl₄]⁻ complexes to metallic gold with a current efficiency of 97-99%. Cyclic voltammograms of both electrodes revealed a linear dependence of j_pc vs. v^1/2 at the studied scan rates of 1-50 mV·s⁻¹, which indicates that the electrochemical reduction of gold is diffusion-limited. Based on the Randles–Ševčík equation, the diffusion coefficient of Au³⁺ ions was calculated.
The diffusion coefficient of Au³⁺ ions at a concentration of 100 mg·L⁻¹ on the gold and carbon electrodes was determined by the CV method; the values are 1.6·10⁻⁵ cm²·s⁻¹ and 56.0 cm²·s⁻¹·g⁻¹, respectively. It was also revealed that the electroreduction of gold on an activated carbon electrode proceeds with a high limiting cathodic current compared with a gold electrode, which is caused by the high specific surface area of the material. At i = 0, the system settled at a constant open circuit potential of about +420 mV vs. Ag/AgCl.
v3-fos-license
2021-05-11T00:03:21.220Z
2021-01-21T00:00:00.000
234138846
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://www.researchsquare.com/article/rs-142283/latest.pdf", "pdf_hash": "4e002b6cb10d1f6cc384a20ed8cc2f0d7be011ad", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1386", "s2fieldsofstudy": [ "Medicine" ], "sha1": "bd910ee7181a2b178ca3d796e83c9b4f5f9448a1", "year": 2021 }
pes2o/s2orc
Death Risk Analysis for Patients With Severe COVID-19 Pneumonia Background: Coronavirus Disease 2019 (COVID-19) is currently a global pandemic. Information on predicting death in severe COVID-19 is not clear. Methods: 151 in-patients from January 23rd to March 8th 2020 were divided into a severe and a critically severe group, as well as a survival and a death group. Differences in clinical and imaging data were analysed between groups. Logistic regression analysis of factors associated with death in COVID-19 was conducted, and a prediction model of death risk was developed. Results: Many clinical and imaging indices were significantly different between groups, including the age, the epidemic history, the past medical history, the duration of symptoms prior to admission, blood routine, inflammatory related factors, Na⁺, myocardial zymogram, liver and renal function, coagulation function, fraction of inspired oxygen and complications. The proportion of patients in imaging stage III and the comprehensive CT scores were increased significantly in the death group. The area under the receiver operating characteristic curve of the prediction model was 0.9593. Conclusions: The clinical and imaging data reflect the severity of COVID-19 pneumonia. The prediction model of death risk might be a promising method to help clinicians to quickly identify and screen potential individuals at high risk of death. Coronavirus Disease 2019, the pathogenic agent of which is severe acute respiratory syndrome coronavirus 2 (SARS-COV-2), is currently a global pandemic. SARS-COV-2 is a novel betacoronavirus belonging to the sarbecovirus subgenus of the Coronaviridae family, which is closely related to severe acute respiratory syndrome coronavirus (SARS-CoV) and Middle East respiratory syndrome coronavirus (MERS-CoV).
It can lead to respiratory symptoms or severe pneumonia [1]. According to the estimation of the World Health Organization, 14% of patients with SARS-COV-2 infection are of the severe type, requiring hospitalization, and 5% are critically severe, requiring intensive care [2,3]. The mortality rate of SARS-COV-2 infected patients could be as high as 4% [2], which is much greater than that of seasonal influenza. A study on the epidemiological characteristics of 72314 cases in China pointed out that SARS-COV-2 was highly infectious, but most patients had mild clinical presentations [4]. The death cases were often more than 60 years old and suffering from basic diseases such as hypertension, cardiovascular disease and diabetes. Furthermore, a few severe patients rapidly developed acute respiratory distress syndrome (ARDS) and died from multiple organ failure [5]. The latest biopsy samples from the autopsy of a patient with severe illness demonstrated diffuse alveolar damage [6]. Additionally, inconsistency existed between the clinical and imaging performances of patients with COVID pneumonia, and diverse imaging features might exist in a certain clinical stage of the disease [7][8][9]. A few studies [10][11][12][13][14] summarized the comprehensive clinical, laboratory and/or imaging findings of severe and critically severe patients, which is of great importance for clinicians to adjust the treatment plan and afford clues to predict death. Therefore, clinical and imaging evidence of severe and critically severe COVID-19 patients needs to be further explored. It is also urgent to explore the risk factors of death for severe and critically severe patients in an international environment in which many countries are still in, or entering, the pandemic. The purpose of this study was to summarize the clinical and imaging characteristics and to develop a model for predicting the risk of death in patients with severe or critically severe COVID-19 pneumonia.
Patients enrollment

This was a multicenter, retrospective clinical study performed at 6 hospitals in Jiangsu and 1 hospital in Wuhan, China. 151 in-patients (104 severe and 47 critically severe) with COVID-19 pneumonia were included from January 23rd to March 8th 2020. All cases were confirmed by reverse transcription-polymerase chain reaction (RT-PCR) and met the following diagnostic criteria. Severe type, fulfilling any one of the following conditions: 1) respiratory distress, respiratory rate (RR) ≥ 30 times per minute, 2) resting-state oxygen saturation (SaO2) ≤ 93%, or 3) oxygenation index (calculated as partial pressure of oxygen / fraction of inspired oxygen (FiO2)) ≤ 300 mmHg (1 mmHg = 0.133 kPa). Critically severe type, fulfilling any one of the following conditions: 1) respiratory failure with mechanical ventilation needed, 2) shock, 3) concomitant failure of other organs. There were respectively 104 patients diagnosed with severe type and 47 with critically severe type COVID-19 pneumonia. In addition, 114 patients were assigned to the survival group and 37 to the death group according to their clinical outcome. This multicenter research was approved by the institutional review board at each study center, and informed consent was obtained from the patients or their surrogates.

Clinical Data

Epidemic history, past medical history, symptoms and signs, as well as age and gender, were recorded. The detailed results of the initial laboratory examinations during the severe course were also recorded, comprising blood routine, infection related factors, serum ion concentration, myocardial zymogram, liver and kidney function tests, coagulation function tests, RR, blood gas analysis and complications.
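The diagnostic criteria above are a simple disjunction of thresholds, so they can be expressed directly in code. A minimal sketch; the function name and argument layout are my own, not from the paper:

```python
def classify_covid_severity(rr, sao2, pao2_fio2,
                            resp_failure_ventilated, shock,
                            other_organ_failure):
    """Classify a patient per the quoted criteria (any one condition
    suffices). Critically severe conditions take precedence."""
    if resp_failure_ventilated or shock or other_organ_failure:
        return "critically severe"
    # Severe type: RR >= 30/min, resting SaO2 <= 93%, or PaO2/FiO2 <= 300 mmHg
    if rr >= 30 or sao2 <= 93 or pao2_fio2 <= 300:
        return "severe"
    return "non-severe"
```

For example, a patient with RR of 32 but no shock or ventilation would be classified as "severe", while any patient in shock is "critically severe" regardless of the other values.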
Diagnostic criteria of cardiac injury: the serum troponin is the most important index, and a value above the 95% confidence interval of the normal range indicates myocardial damage; increases of other indices can also indicate myocardial damage, in order of importance: creatine kinase isoenzyme, creatine kinase, lactic dehydrogenase (LDH) [15]. Diagnostic criteria for renal injury: estimated glomerular filtration rate (eGFR) was calculated based on serum creatinine, and renal function was defined as impaired when eGFR < 60 ml/min [16]. The score of the past medical history was determined by adding the following items if present (3 for malignant tumor; 2 for benign tumor, renal or liver malfunction; 1 for chronic obstructive pulmonary disease, hypertension, diabetes or others).

Imaging Data

At the beginning of the severe course, the initial imaging (138 patients underwent chest CT and 13 underwent chest radiograph) was analyzed, among which 76 patients had follow-up CT examinations and 31 patients had follow-up chest radiograph examinations. The scanning parameters for CT were as follows: tube voltage 120 kV, tube current 110 mA, pitch 1.0, rotation time ranging from 0.5 s to 0.75 s, slice thickness 5 mm, with 1 mm or 1.5 mm section thickness for axial, coronal and sagittal reconstructions. The parameters for the chest radiograph were as follows: the flat panel detector was attached to the patient's chest, and the voltage and current were 120 kV and 200 mA, respectively. The chest imaging of the 151 patients was analyzed by two experienced attending radiologists, who were blinded to the clinical information and separately evaluated the imaging and recorded the severity. The chest CT images and chest radiographs were classified into mild (stage I), progressed (stage II) and severe stage (stage III) according to the scope of the lung field involved, with the mild stage less than 25%, the progressed stage 26-50% and the severe stage more than 50%.
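The past-medical-history score described above is just a weighted sum over the listed conditions. A minimal sketch; the dictionary keys are shorthand labels chosen here for illustration, not the study's coding scheme:

```python
# Points per past-medical-history item, as stated in the text.
HISTORY_POINTS = {
    "malignant tumor": 3,
    "benign tumor": 2,
    "renal malfunction": 2,
    "liver malfunction": 2,
    "COPD": 1,
    "hypertension": 1,
    "diabetes": 1,
    "other": 1,
}

def history_score(conditions):
    """Sum the points of every past-medical-history item present."""
    return sum(HISTORY_POINTS[c] for c in conditions)
```

So a patient with a malignant tumor and hypertension scores 3 + 1 = 4, and a patient with no relevant history scores 0.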
The CT score of ground-glass opacity (GGO), consolidation, and the comprehensive score of inflammatory pulmonary infiltration were analyzed quantitatively using a radiologic scoring system ranging from 0-25 points, which was an adaptation of the method previously used to describe idiopathic pulmonary fibrosis and SARS [17]. Each lung lobe was evaluated with 0-5 points on the basis of the area involved, with score 0 for normal appearance, 1 for less than 5% of the lobe area involved, 2 for 6-25%, 3 for 26-50%, 4 for 51-75%, and 5 for more than 75%. A total score was eventually calculated via the addition of the score of each lobe.

Statistical analysis

The Mann-Whitney U test and two-sample t test were used respectively for non-normally distributed and normally distributed data to compare the continuous variables, and the Pearson chi-square test was used to compare the categorical variables, between the severe and critically severe groups and between the survival and death groups, using statistical analysis system software (SAS ver. 9.4, SAS Institute Inc., Cary, NC). Then, univariate and multivariable logistic regression analyses were conducted, and the prediction model for mortality in patients with severe COVID-19 pneumonia was developed. Finally, the model was tested by the receiver operating characteristic (ROC) curve. A P value less than 0.05 was considered statistically significant. The mean value of continuous variables in normal distribution was recorded as Mean (SD) and the mean value of non-normally distributed data was recorded as Median (IQR). The categorical variables were recorded as count and percentage.

Clinical Features

The clinical data of the 151 patients with severe COVID-19 pneumonia and the results of the group comparisons were shown in Table 1. The epidemic history, past medical history, score of past medical history and the duration of symptoms prior to admission were different between the survival and death groups.
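The lobe-wise scoring rule above (0-5 points per lobe, five lobes, 0-25 total) can be sketched as a small helper. An illustrative implementation, not the radiologists' actual software:

```python
def ct_comprehensive_score(lobe_involvement_pct):
    """Comprehensive CT score (0-25): each of the five lung lobes
    contributes 0-5 points depending on the percentage of its area
    involved, per the adapted scoring system described in the text."""
    def lobe_points(pct):
        if pct <= 0:
            return 0
        if pct <= 5:
            return 1
        if pct <= 25:
            return 2
        if pct <= 50:
            return 3
        if pct <= 75:
            return 4
        return 5

    assert len(lobe_involvement_pct) == 5, "five lung lobes expected"
    return sum(lobe_points(p) for p in lobe_involvement_pct)
```

For instance, lobes with 10%, 30%, 0%, 60% and 4% involvement score 2 + 3 + 0 + 4 + 1 = 10 points.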
The counts of white blood cells and neutrophils, C-reactive protein (CRP), erythrocyte sedimentation rate (ESR), procalcitonin, interleukin-6 (IL-6), interleukin-8 (IL-8), interleukin-10 (IL-10), Na⁺, myoglobin, troponin, LDH, aspartate aminotransferase, serum urea nitrogen, prothrombin time (PT), activated partial thromboplastin time (APTT), international normalized ratio (INR), FiO2, and the occurrence rates of ARDS, septic shock, disseminated intravascular coagulation (DIC) and acute kidney injury (AKI) were lower, while the lymphocyte count and albumin were higher, in the severe and survival groups than in the critically severe and death groups (P < 0.05). The percentage of patients with dyspnea, total bilirubin, fibrinogen and RR were lower, while SaO2 was higher, in the severe group than in the critically severe group (P < 0.05). Serum creatinine and the occurrence rates of cardiac injury and liver injury were lower, while the proportion of patients with moderate and high fever and the estimated glomerular filtration rate were higher, in the survival group than in the death group (P < 0.05).

Imaging findings

As shown in Table 2, among the 151 severe and critically severe patients, 6 (3.97%) patients were diagnosed as stage I (Fig. 1), 68 (45.03%) as stage II (Fig. 2), and 77 (50.99%) as stage III (Fig. 3) on chest CT or chest radiograph images. On CT images, 116 (84.06%) patients had the whole lung involved. The lesions of 83 (60.14%) patients on chest CT were mainly peripherally distributed. The proportion of patients with stage III in the death group was significantly higher than that in the survival group (73.0% vs. 43.9%, P < 0.05). There were significant differences in the comprehensive score not only between the severe and critically ill groups, but also between the survival and death groups (P < 0.05).

Logistic regression analysis and prediction model

The univariate logistic regression analysis of factors associated with death in COVID-19 is shown in Table 3.
The odds ratio estimate was highest in patients with DIC (59.105), followed by septic shock (37.500) and myocardial injury (34.500). The multivariate logistic regression analysis of factors associated with death in COVID-19 is shown in Table 3. The death prediction model of risk factors for a severe patient was written as: WBC: white blood cell; PT: prothrombin time; APTT: activated partial thromboplastin time; CRP: C-reactive protein; ARDS: acute respiratory distress syndrome; DIC: disseminated intravascular coagulation; AKI: acute kidney injury. The percent concordant of the prediction model was 96.1%. The ROC curve of the prediction model is shown in Fig. 4, and the area under the ROC curve was 0.9593.

Discussion

COVID-19 is a novel infectious disease characterized by high transmissibility and serious harmfulness. A few patients with a severe course of the disease tend to have severe clinical symptoms, and they may rapidly progress to ARDS and need the support of an intensive care unit [18]. Hence, it is essential to closely monitor the condition of patients by dynamically tracking the alteration of symptoms and laboratory examinations and the change of the chest imaging performances, which are helpful for evaluating the disease severity and adjusting the treatment plan in time. There were some characteristic clinical features pertaining to the severe disease course of SARS-COV-2 infected patients. The past medical history had an effect on disease mortality, which was confirmed by the reports from Sohrabi et al [19], Guan et al [20] and Jordan et al [3]. In the present study, the mean age of death cases was approximately 10 years older than that of survivors, which was similar to a previous study [21]. A male predominance among patients with severe COVID-19 was obvious, with a male-to-female ratio of almost 3:2 in the present study.
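The reported AUC of 0.9593 summarizes how well the model's predicted risks rank deaths above survivals. The authors used SAS; purely for illustration, the AUC can be computed from outcome labels and predicted scores with the rank (Mann-Whitney U) formulation. A self-contained sketch, not the paper's pipeline:

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the rank formulation: the probability
    that a randomly chosen positive (label 1) outscores a randomly chosen
    negative (label 0), with ties counting one half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Perfect ranking gives 1.0, random ranking 0.5; the closer the value is to 1, the better the model separates deaths from survivals.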
This was consistent with Chen's study, which suggested that older men were more likely to be infected with SARS-COV-2, resulting in severe and even fatal respiratory diseases such as ARDS [5]. In the death group, the duration of symptoms prior to admission was longer than that of the survival group, reflecting that a prolonged duration from symptom onset to hospitalization tended towards poorer outcomes, which was consistent with Liang's study [22]. In the present study, the main initial symptoms of the severe patients were fever and/or cough. Dyspnea was frequently seen in the severe course of patients with COVID-19 pneumonia, especially in critically severe patients, due to the severe lung lesions of the pneumonia. The incidence of ARDS in the critically severe and death groups was significantly higher than that in the severe and survival groups, respectively. The RR in critically severe patients was significantly higher than that in severe patients, as were the SaO2 and FiO2, which may be due to mechanical ventilation. As to the blood routine, increased leukocyte and neutrophil counts and decreased lymphocyte count and ratio were remarkable features, especially in the critically severe group and the death group. Wang et al first uncovered the continuous increase of neutrophil counts in dead cases [23]. It may be related to the cytokine storm induced by virus invasion. And lymphopenia suggested SARS-COV-2 might mainly target lymphocytes and lead to the progression of the disease [5]. The infection related factors, including CRP, ESR, procalcitonin, IL-6, IL-8 and IL-10, were increased in the severe patients, especially in critically severe patients and death cases. The study from Ulhaq et al suggested that continuous measurement of circulating IL-6 levels may be of great significance in identifying disease progression in patients infected with COVID-19 [24]. A retrospective study suggested that elevated levels of IL-6 were related to the high mortality of COVID-19 infection [25].
A significantly higher incidence of septic shock and DIC was seen in the critically severe and death groups. This may be due to the imbalance of thrombin production caused by the activation of vascular endothelium, platelets and white blood cells, which occurred locally and systemically in the lung system of patients with severe pneumonia, resulting in fibrin deposition, tissue damage and microangiopathy [26]. It could be aggravated by the occurrence of septic shock [27,28]. It was reported that most death cases and very few survivors had evidence of DIC, which occurred frequently in the deterioration of COVID-19 pneumonia and was often associated with mortality [29]. This also suggested that clinicians need to be vigilant in identifying the presence of DIC, especially in patients who had already experienced septic shock. There was a significant relationship between multiple organ injury and mortality. In critically severe and death patients, myoglobin, troponin, LDH and the incidence of cardiac injury were higher than those in non-death patients, which was similar to the results of some previous studies on the relationship between illness severity and myocardial injury in patients with COVID-19, and was consistent with the correlation study between heart injury and death after SARS-CoV-2 infection [30,31]. Recent studies on COVID-19 had shown that the incidence of liver injury ranges from 14.8-53%, with decreased albumin levels in critically ill patients, and the incidence of liver injury might reach as high as 78.0% in the death cases of COVID-19 [32]. In this study, the incidence of liver injury in the critically severe and death groups was significantly increased compared with the severe and survival groups. This demonstrated that liver injury was related to the severity of the disease and mortality, which may be due to the cytokine storm or drug-induced liver damage [32,33].
In the present study, the serum creatinine and serum urea nitrogen levels in the death group were significantly higher, and the eGFR lower, than those in the survival group, and AKI was significantly more prevalent in both the critically severe group and the death group. This was consistent with the study of Cheng et al, which showed that the development of AKI during hospitalization in patients with COVID-19 was related to in-hospital mortality [34]. The coagulation function and the serum Na⁺ concentration changed during the severe course of COVID-19 pneumonia. Recently, coagulation function has drawn attention, and some related indices have been studied between severe and non-severe patients [18,35]. In this study, these indices were further compared between severe and critically severe patients, and between survival and death patients. PT, APTT, INR and the fibrinogen level were related to the severity of the disease, and the former three might be related to mortality. According to a previous study [36], hypernatremia was a common electrolyte disorder, which was related to long-term hospitalization and death, and was more common in critically ill patients. Abnormal changes in the central nervous system and mental state may be causes of hypernatremia, while digestive tract or urinary system disorders cannot be ruled out [32]. In addition, it may also be related to large intravenous supplementation with sodium-containing fluids. As to the imaging performances, multiple lung lobes were involved in 98.6% of patients, and the whole lung was involved in 84.06% of patients. The proportion of patients in stage III increased significantly in the death group, as did the comprehensive CT imaging scores in the critically severe and death groups. Our results showed that the severity of the CT findings was consistent with the severity of the clinical course of the disease, as suggested by a previous study [37].
Li et al [38] found that the development pattern of COVID-19 on CT images was similar to that of SARS or MERS. There were some common imaging features, so the final diagnosis had to be combined with the clinical manifestation, epidemic history and laboratory examination. However, the advantage of convenient and rapid CT examination is irreplaceable. A study of critically ill patients with SARS-CoV-2 pneumonia demonstrated that early or repeated radiological examination is helpful for screening patients with SARS-CoV-2 pneumonia [39]. Previous studies referring to mortality risk calculated the overall probability based on the infected and confirmed population [40,41]. However, the individual risk of death was important, especially in severe and critically severe patients, as it might influence the treatment plan and the response of clinicians or medical institutions. In the univariate logistic regression analysis, DIC was the best predictor (nearly 59 times the death risk of patients without DIC), followed by septic shock and cardiac injury. The prediction model included evidence of the patient's age, cardiac injury, AKI and ARDS, among which the evidence of ARDS was the most powerful predictor. In the current COVID-19 epidemic, this prediction model might be a promising method to help clinicians to quickly identify and screen potential individuals at high risk of death. There were several limitations to this study. First, the clinical and imaging data of patients were from multiple centers; hence the data were heterogeneous, which might affect the statistical results. Additionally, too many values were missing for some indices, which meant that the P value could not be calculated in the test of group differences. Second, the initial imaging and follow-up imaging of the patients lacked a uniform standard. Some patients only underwent chest X-ray because of the disease severity, and the follow-up intervals were not identical.
Finally, although both the percent concordant and the area under the curve of the prediction model were at a high level, a larger cohort study might be warranted to validate the accuracy and application value of the prediction model.

Conclusion

The clinical and imaging data reflect the severity of COVID-19 pneumonia, and part of them were related to mortality. The prediction model of death risk might be a promising method to help clinicians to quickly identify and screen potential individuals at high risk of death.

Consent for publication

The consent for publication of the chest CT and radiograph images in this study was obtained from the relevant patients.

Availability of data and materials

Figure 1 … left upper lung lobe surrounded by ground-glass opacity (a, b, arrows), and patchy ground-glass opacity in the right upper lung lobe (a, arrow).

Figure 2 A 64-year-old woman diagnosed with severe COVID-19 on January 31st with vomiting and anorexia. a, c. Axial chest CT on February 1st showed progressed performances (stage II) with multiple lesions, including ground-glass opacity, consolidation and fibrosis, mainly distributed in the lower lung lobes. b, d. Axial chest CT on February 4th showed mild absorption of the ground-glass opacity and consolidation.

Figure 3 A 58-year-old man diagnosed with severe COVID-19 on January 30th with asthma. Imaging showed severe performances (stage III). a. Chest radiograph on January 31st showed multiple high-density lesions with peripheral distribution and blurred boundaries. b-d. Axial chest CT on February 4th showed diffusely distributed ground-glass opacity in bilateral lungs involving all lung lobes, with mild consolidation.
v3-fos-license
2020-10-29T09:03:26.251Z
2021-01-01T00:00:00.000
228844138
{ "extfieldsofstudy": [ "Medicine", "Chemistry" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.7150/thno.50741", "pdf_hash": "0c6f1a9fdb71c81c8a504b471e3a24a407f95ec6", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1392", "s2fieldsofstudy": [ "Medicine", "Engineering", "Materials Science" ], "sha1": "578ec5135a4c17655c409ce346b2212ea60a6d7d", "year": 2021 }
pes2o/s2orc
Progenitor cell-derived exosomes endowed with VEGF plasmids enhance osteogenic induction and vascular remodeling in large segmental bone defects Large segmental bone regeneration remains a great challenge due to the lack of vascularization in newly formed bone. Conventional strategies primarily combine bone scaffolds with seed cells and growth factors to modulate osteogenesis and angiogenesis. Nevertheless, cell-based therapies have some intrinsic issues regarding immunogenicity, tumorigenesis, bioactivity and off-the-shelf transplantation. Exosomes are nano-sized (50-200 nm) extracellular vesicles with a complex composition of proteins, nucleic acids and lipids, which are attractive as therapeutic nanoparticles for disease treatment. Exosomes also have huge potential as desirable drug/gene delivery vectors in the field of regenerative medicine due to their excellent biocompatibility and efficient cellular internalization. Methods: We developed a cell-free tissue engineering system using functional exosomes in place of seed cells. Gene-activated engineered exosomes were constructed by using ATDC5-derived exosomes to encapsulate the VEGF gene. The specific exosomal anchor peptide CP05 acted as a flexible linker and effectively combined the engineered exosome nanoparticles with 3D-printed porous bone scaffolds. Results: Our findings demonstrated that engineered exosomes play dual roles as an osteogenic matrix to induce the osteogenic differentiation of mesenchymal stem cells and as a gene vector to controllably release the VEGF gene to remodel the vascular system. In vivo evaluation further verified that the engineered exosome-mediated bone scaffolds could effectively induce the bulk of vascularized bone regeneration. Conclusion: In our current work, we designed specifically engineered exosomes based on the requirements of vascularized bone repair in segmental bone defects. 
This work also illustrates the potential of functional exosomes in acellular tissue engineering. Introduction Conventional tissue engineering, consisting of biomaterial scaffolds, seed cells, and growth factors, plays a pivotal role in inducing the regenerative repair of injured tissues and organs [1][2][3][4]. Among the three basic elements of tissue engineering, the seed cells are vital for initiating tissue regeneration. However, cell-based tissue engineering has a number of drawbacks related to the cell source and activity, immunological rejection, long therapeutic times and high costs in clinical application [5,6]. Thus, cell-free tissue engineering has been extensively explored in the field of regenerative medicine as a safe, effective and off-the-shelf strategy. Cell-based therapies are advanced therapeutic strategies that have brought promise for some severe diseases [7][8][9]. Unfortunately, cell-based therapy has still not become widespread in clinical applications, and the safety and effectiveness of cell transplantation remain issues of major focus. Exosomes derived from therapeutic cells contain many functional microRNAs, proteins and bioactive molecules that modulate cell behaviours and activate signalling pathways, as well as directly participating in the treatment of diseases [10][11][12]. More importantly, the exosome itself is not a cell, and it can readily bypass the routine drawbacks of cell-based therapy [13][14][15][16]. Consequently, exosome-mediated therapy has been considered an alternative to conventional cell therapy and has been used in cell-free tissue engineering. Exosome-mediated acellular bone regeneration has been documented to enhance skull regeneration in our previously published work [17]. 
Herein, we continue to investigate the potential of exosome-enhanced therapy on large segmental bone defects, which cannot be repaired without the promotion of bone cells and the reconstruction of internal vasculature. To address the aforementioned requirements, a specifically engineered exosome has been constructed using ATDC5-derived exosomes to encapsulate a plasmid carrying the vascular endothelial growth factor (VEGF) gene. ATDC5 is a chondrogenic progenitor cell line that has been verified to exhibit significant osteogenic differentiation capacity [18]. VEGF is a crucial growth factor that has been shown to remodel the vasculature in many regenerating tissues [19,20]. Hence, the well-designed, engineered exosomes exhibit dual roles as an osteogenic matrix and a gene vector to potentially increase vascularized osteogenesis in segmental bone defects. Exosome-based therapy has been primarily performed via intravenous administration of exosomes designed with targeting molecules or relying on the homing effects of stem cells [21][22][23]. However, the intravenous administration of functional exosomes results in minimal accumulation at the defect site and can pose an obstruction risk to some blood-rich organs [24][25][26]. Thus, in this work, we combined engineered exosomes with polycaprolactone (PCL) 3D-printed porous bone scaffolds to ensure sustainable and stable therapy at the local defect site. PCL, as an FDA-approved, biodegradable material, has been extensively applied in bone tissue engineering with minimal provocation of inflammatory and immunological responses, excellent biocompatibility and nontoxic degradation [27,28]. Notably, a flexible and specific connection between the engineered exosomes and the 3D-printed scaffolds is an important precondition for realizing effective topical therapies. 
The exosomal anchor peptide CP05 has been reported to specifically bind to the antigen CD63, a tetraspanin enriched on the exosome surface that has been used as an exosomal marker [29,30]. Hence, we used CP05 as a flexible linker to modify the 3D-printed scaffolds and promote the grafting efficiency of the engineered exosomes. The resulting exosome-activated bone scaffolds had dual functions in inducing osteogenic differentiation and remodelling vasculature formation in vivo (Figure 1). Figure 1. General scheme of engineered exosome-enhanced therapies for osteogenesis and angiogenesis. (A) 3D-printed porous PCL scaffolds were modified with 1,6-hexanediamine to generate amino groups on the PCL scaffolds, which were subsequently modified with the exosomal anchor peptide CP05. (B) Engineered exosomes were fabricated by encapsulating the VEGF plasmid DNA into ATDC5-derived exosomes. The well-designed bone scaffolds were constructed by combining the engineered exosomes with the CP05-modified 3D-printed scaffolds, and eventually implanted into a rat radial defect model to promote osteogenesis (C) and angiogenesis (D). Identification and characterization of exosomes Based on observations of TEM images, exosomes derived from ATDC5 cells had a spherical morphology with a diameter of approximately 100 nm (Figure 2A). Two specific exosome-related protein markers, CD63 and TSG101, were detected by western blot (Figure 2B). The mean size of the exosomes was 114.2 ± 1.8 nm (n = 3), based on nanoparticle tracking analysis (NTA) measurements (Figure 2C). The zeta potential of the exosomes was -32.0 ± 1.5 mV, based on dynamic light scattering (DLS) analysis (Figure 2D). Successful grafting of the CP05 exosomal anchor peptide onto the exosomes was verified via flow cytometry, and the grafting efficiency was as high as 90.96% (Figure 2E). 
Local expression of gene-activated engineered exosomes in vitro The plasmid pEGFP-kozVEGF165 (VEGF) was verified by agarose gel electrophoresis; the main bands were located between 2000 bp and 3000 bp, consistent with the originally designed fragment (Figure S1). The VEGF plasmid was subsequently encapsulated by exosomes via electroporation to generate the gene-activated engineered exosomes. After incubating the engineered exosomes with rBMSCs for 24 h and 48 h, we found that the transfection efficiency was significantly elevated with increased culture time (Figure 3A and Figure S2). Tube formation assays with HUVECs further confirmed that the supernatants of the gene-activated engineered exosomes induced more tube formation than the pure exosome negative controls, and tube formation increased further when the culture time was increased from 1 h to 3 h (Figure 3B and Figure S3). In addition, the relative expression of intracellular VEGF in the experimental group was approximately 2800-fold higher than in the negative control based on qRT-PCR analysis (Figure 3C). The concentration of secreted VEGF protein by enzyme-linked immunosorbent assay (ELISA) was approximately 10 pg/mL, whereas secreted VEGF protein was undetectable in the control group (Figure 3D). Characterization and modification of 3D-printed scaffolds The 3D-printed scaffolds exhibited a micro-scale porous structure with pore diameters of approximately 250 μm. The structure was able to mimic that of trabecular bone, facilitating the exchange of nutrients and oxygen and the formation of neovascularization (Figure 4A). The compressive strength of the 3D-printed PCL scaffolds was 4.8 ± 0.6 MPa (Figure S4). 
Figure 3. (A) The engineered exosomes (EXOs-VEGF) were successfully transfected into rBMSCs after culturing for 24 h and 48 h, and the expression of the VEGF plasmid (pEGFP-kozVEGF165) was elevated with increased culture time. The nuclei of rBMSCs were stained with DAPI (blue); the cytoskeletons of rBMSCs were stained with ActinRed (red). (B) Tube formation assays with HUVECs were performed using the serum-free supernatant of rBMSCs with pure exosomes (EXOs) and EXOs-VEGF for 1 h and 3 h. The results further demonstrated that EXOs-VEGF strongly promoted tube formation of HUVECs in vitro. The cytoskeletons of HUVECs were stained with FITC (green). (C, D) Both qRT-PCR and ELISA clearly confirmed that the relative expression of VEGF with EXOs-VEGF was far higher than that with pure EXOs (independent-sample t-tests; **, p < 0.01; ***, p < 0.001 compared to the EXOs group) (n = 4). A positively charged amino group (-NH2) was first introduced onto the 3D-printed scaffolds by 1,6-hexanediamine, and the XPS results clearly demonstrated a nitrogen peak for the 3D-printed PCL scaffolds modified with amine groups (Figure 4B). In addition, the ninhydrin coloration assay indicated an obvious colour change before and after the amino group coating (Figure 4C). All of these results verified that the amino groups were successfully coated onto the surface of the 3D-printed PCL scaffolds. To maintain a stable and flexible link between the engineered exosomes and the 3D-printed scaffolds, the CP05 exosomal anchor peptide was used to modify the 3D-printed scaffolds. Our findings showed that the scaffolds modified with an amine group (PCL NH2+) exhibited much greater absorption of CP05 than the scaffolds without the amine group modification (PCL NH2-). CP05 was conjugated to Alexa Fluor 488 for observation by confocal laser scanning microscopy (Figure 4D). The graft efficiency of CP05 onto the PCL scaffolds was 26.7% (wt/wt). 
In vitro interaction between cells and scaffolds 3D-printed PCL scaffolds modified with the CP05 anchor peptide (PCL-CP05) exhibited a higher affinity for the engineered exosomes than the control scaffolds without the CP05 modification (Figure 5A-B). The CCK-8 assay results showed that there was no significant difference among the different groups at the same time point (Figure 5C). In addition, the scaffold surface was able to support cell adhesion and spreading, indicating good biocompatibility of the 3D-printed PCL scaffolds (Figure 5D). The graft efficiency of the engineered exosomes onto the PCL scaffolds was 41.7% (wt/wt). Cellular uptake assays showed that a large number of DiI-labelled exosomes were internalized and distributed in the perinuclear region of rBMSCs (Figure 5E). Figure 5. The graft efficiency of engineered exosomes connected onto the 3D-printed bone scaffolds was significantly elevated with the help of the anchor peptide CP05. (C) Cell proliferation showed that there was no difference in biocompatibility before and after the modification with CP05 (one-way ANOVA followed by Tukey's post hoc test; NS, no significance compared to the control group) (n = 4). (D) The 3D-printed scaffolds supported cell adhesion well, based on the analysis of confocal Z-stack and SEM images. (E) EXOs grafted onto the 3D-printed scaffolds were internalized by cells, based on the cellular uptake assay. The nuclei of rBMSCs were stained with DAPI (blue), the cytoskeletons of rBMSCs were stained with FITC (green), and the EXOs were stained with DiI (red). Osteogenic differentiation induced by engineered exosomes ALP staining has been widely assessed as an early marker of osteogenic differentiation. Our findings showed there was no obvious difference in the ALP staining of each group on day 7 (Figure S5). However, alizarin red staining showed obvious mineralized nodules in both the exosome and engineered exosome groups at the 14-day time point (Figure 6A and Figure S6). 
There was also positive immunofluorescent staining for the osteogenic marker OCN at the 14-day time point (Figure 6B and Figure S6). The qRT-PCR results further indicated that both the ATDC5-derived exosomes and the VEGF-engineered exosomes could promote a certain degree of rBMSC osteogenic differentiation on day 7 (Figure 6C). Among the genes analysed, the expression levels of ALP and Col1a1 were significantly upregulated in the exosome-mediated cultures. There were no major differences in the expression of Runx2 and OCN between the ATDC5-derived exosomes and the engineered exosomes with encapsulated VEGF. All of these results implied that the ATDC5-derived exosomes enhanced the osteogenic capacity; however, the introduction of VEGF into the engineered exosomes did not obviously affect their osteogenic tendency. Figure 6. The osteogenic marker OCN was positively stained by immunofluorescence, and cell nuclei and cytoskeletons were stained with DAPI (blue) and FITC-phalloidin (green), respectively. (C) qRT-PCR analysis of ALP, Col1a1, Runx2 and OCN further confirmed that both EXOs and EXOs-VEGF were able to promote osteogenic differentiation of rBMSCs, with no significant difference between EXOs and EXOs-VEGF (one-way ANOVA followed by Tukey's post hoc test; *, p < 0.05; **, p < 0.01; ***, p < 0.001 compared to the blank group) (n = 3). In vivo evaluation of osteogenesis and angiogenesis We continued to investigate the performance of the gene-activated engineered exosomes in vivo using a rat radial defect model (Figure S7A). Twelve weeks after implantation, the scaffold was integrated into the native bone tissue (Figure S8A) and many newly formed tissues filled the scaffold pores (Figure S8B). In addition, the compressive strength of the implanted PCL scaffolds was greatly enhanced due to the newly formed bone at 12 weeks after implantation (Figure S4). 
Reconstructed micro-CT images revealed that bone regeneration in the experimental group was significantly better than that in the other groups at the 6- and 12-week time points (Figure 7A and Figure S7B). More importantly, compared with the other groups, a bulk mass of new bone was generated only in the experimental group at 12 weeks after implantation. Additionally, the micro-CT images were quantified, including the bone volume (BV), bone tissue volume to total tissue volume ratio (BV/TV), and trabecular thickness (Tb.Th), and the results were highly consistent with the 3D reconstructed micro-CT images (Figure 7B). HE staining further indicated the presence of a bulk of newly formed bone tissue, and a number of blood vessels were observed in the experimental group at the 12-week endpoint after scaffold implantation (Figure 7C and Figure S7C). In contrast, the control group scaffolds were mainly filled with soft connective tissues consisting of fibrous connective tissue with randomly oriented low-density collagen fibres and blood vessels. Masson's trichrome staining further indicated that more mature collagen fibres were present in the experimental group compared to the control groups (Figure 7D and Figure S7D). The experimental group also showed positive staining for the angiogenic marker CD31 by immunofluorescence (Figure 7E and Figure S7E). All of these results solidly demonstrated that the well-designed, engineered exosome-activated scaffolds were able to successfully induce vascularized osteogenesis. Discussion Acellular therapy has attracted increasing attention due to its ability to bypass some of the inherent issues associated with conventional cell-based therapy, such as the cell source, cell bioactivity, cell immunity, long therapeutic times and high costs. Recent progress in cell-free therapies has highlighted the potential use of exosomes as a replacement for functional cells. 
Lin and colleagues explored the therapeutic effect of MSC-derived exosomes in a 3D-printed scaffold for early OA therapeutics, and demonstrated that MSC-derived exosomes could enhance mitochondrial biogenesis both in vitro and in vivo [31]. Yang et al. found that a high-fat diet altered the miRNA profile of visceral adipose tissue-derived exosomes to exacerbate colitis severity via the presence of proinflammatory miRNAs in high-fat-diet-fed mice [32]. Xu and his group found that exosomes derived from clear cell renal cell carcinoma (CCRCC) patients transported miR-19b-3p into CCRCC cells and were able to initiate the EMT, promoting metastasis [33]. Functional exosomes from the ATDC5 chondrogenic progenitor cell line have been verified to exhibit significant osteogenic differentiation capacity [18]. In addition, ATDC5, as a mature cell line, has been extensively used in bone tissue engineering due to its quick proliferation and stability in culture. Thus, here, we attempted to explore cell-free tissue engineering by constructing novel engineered exosomes that can be used as both an osteogenic matrix and a gene vector (Figure 1). Our findings show that the ATDC5-derived exosomes exhibit osteogenic capacity similar to that of ATDC5 cells, and they can enhance the osteoblastic differentiation of rBMSCs in vitro (Figure 6). In addition, to strictly control the quality and stability of the exosomes, we applied several approaches involving a uniform cell source, culture parameters and isolation parameters with ultracentrifugation-based techniques. Therefore, nanoscale exosomes from functional cells may be considered an alternative bioactive agent for cell-free enhanced therapy. Vascularized osteogenesis plays a pivotal role in promoting the regenerative repair of segmental bone defects, as the lack of vasculature can cause severe necrosis in large bone defects [34][35][36]. 
As a crucial growth factor, VEGF has been extensively utilized to induce vasculature reconstruction [19,20]; however, most of the current work has primarily utilized the VEGF protein, which can cause problems due to its easy degradation, short half-life, systemic toxicity, and high cost [37]. Thus, we suggest using the VEGF gene in place of the VEGF protein. Furthermore, exosomes have been explored as excellent biovectors to deliver diverse genes and drugs in sustained and enhanced therapies [38,39]. In this study, we constructed a novel, engineered exosome by encapsulating the VEGF gene in native, progenitor cell-derived exosomes. After transfection of the engineered exosomes, both PCR and ELISA assays showed a significant difference between pure exosomes and VEGF-containing exosomes. Our results verified that both native exosomes and engineered exosomes can promote the osteogenic differentiation of rBMSCs to similar levels, indicating that the introduction of VEGF did not impact the osteogenic differentiation capability of native exosomes (Figure 6). Additionally, compared to the other groups, including the exosome group, the engineered exosomes exhibited the best osteogenesis in the rat radial defect model. This indicated that the engineered exosomes facilitate both angiogenesis and osteogenesis in vivo. Consequently, our gene-activated engineered exosomes have dual functions as an osteogenic matrix and a gene vector. 3D-printed porous bone scaffolds are beneficial for promoting the ingrowth of new tissues, and they provide a 3D space for vasculature remodelling [1,[40][41][42][43]. In this study, we considered it a vital precondition that nanoscale exosomes be combined with micro-scale porous scaffolds. We eventually utilized the CP05 exosomal anchor peptide as a linker molecule to establish a stable and flexible connection between the engineered exosomes and the 3D-printed porous scaffolds (Figure 5A). 
It has been documented that CP05 is a CD63-specific exosomal anchor peptide, while CD63 is an exosomal marker and a tetraspanin enriched on the exosome surface [29]. Thus, CP05 paves a new avenue for exosome engineering due to its direct and effective modification, cargo loading, and exosome capture. Our results confirm that the CP05 modification greatly improves the grafting efficacy between the engineered exosomes and the bone scaffolds (Figure 5B). Topical delivery and controllable release of functional exosomes at the defect site are the primary challenges for large segmental bone defects [44][45][46][47]. First, local therapy with engineered exosomes can bypass several of the hurdles associated with traditional intravenous injection, including the lack of accumulation at the defect site and the risk of obstructing some blood-rich organs [24][25][26]. Second, compared to intravenous administration, the local release of functional exosomes via directed transplantation can greatly improve the treatment efficiency [45]. In our current work, we combined engineered exosomes with 3D-printed bone scaffolds to achieve local therapy via the directed transplantation of exosome-mediated bone scaffolds. In vivo animal evaluations clearly demonstrated that the well-designed scaffolds could successfully induce vascularized bone regeneration (Figure 7 and Figure S7). Conclusions Acellular enhanced therapy is a promising strategy, and its clinical application could bypass a series of issues associated with conventional cell-based therapy, including immunological rejection, bioactivity maintenance, long therapeutic times and high costs. In this study, we designed and constructed engineered exosomes using ATDC5-derived exosomes with an encapsulated VEGF gene plasmid, which exhibited dual functions in inducing rBMSC osteogenic differentiation and in modulating the controlled delivery of the VEGF gene. 
The engineered exosomes were combined with 3D-printed porous bone scaffolds via a specific linker (the CP05 anchor peptide) to effectively increase osteogenesis and angiogenesis in segmental bone defects. Hence, our current work provides an alternative use for functional exosomes in replacing seed cells and constructing cell-free tissue engineering with equivalent therapeutic potential for vascularized bone remodelling. Isolation and characterization of ATDC5-derived exosomes The mouse chondrogenic progenitor cell line ATDC5 was purchased from BNCC (Suzhou, China). The ATDC5-derived exosomes were isolated and characterized as described before [17]. In brief, the serum-free medium of ATDC5 was centrifuged at 300×g for 15 min and 2000×g for 20 min at 4 °C to remove cellular debris. The supernatant was filtered through a 0.22 μm filter (Millipore, Merck, Germany) and ultracentrifuged at 100,000×g in a 70Ti rotor for 3 h to collect exosomes (ultracentrifuge, Beckman Coulter, L-80 XP). The protein concentration of the exosomes was quantified by a BCA protein assay kit (Beyotime, China). The morphology of the exosomes was visualized by transmission electron microscopy (TEM, Hitachi, Japan). Nanoparticle tracking analysis (NTA, Particle Metrix) was performed to measure the nanoparticle size and size distribution. The zeta potential distribution of the exosomes was further investigated by DLS (Zetasizer Nano ZS90, Malvern, UK). The expression of TSG101 and CD63 on exosomes was detected by western blot. The specific binding efficiency between the exosomes and the exosomal anchor peptide (Alexa Fluor 488-conjugated CP05, Sigma-Aldrich) was identified by flow cytometry (CytoFLEX, Beckman Coulter, USA). Preparation of the VEGF plasmid and its electroporation into exosomes The plasmid pEGFP-kozVEGF165 (VEGF) was a gift from Professor Kun Ma's lab (Dalian University of Technology, China). 
The VEGF plasmid DNA was isolated and identified as previously reported [28]. In brief, the VEGF plasmid was propagated in E. coli DH5α cells. A single isolated colony of E. coli DH5α from a freshly streaked plate was picked to inoculate an appropriate volume of LB medium containing the appropriate antibiotic, and then incubated overnight with vigorous shaking (~300 rpm, 37 °C; shaking incubator). The VEGF plasmid was isolated and purified using the Endo-Free Plasmid Mini Kit II (OMEGA, USA) as described in the kit manual. The DNA concentration was determined using a NanoDrop 2000 ultramicro spectrophotometer (Thermo, USA). For electroporation, 30 µg of exosomes and 10 µg of VEGF plasmid were mixed in 400 μL of electroporation buffer (1.15 mM potassium phosphate pH 7.2, 25 mM potassium chloride, 21% OptiPrep) and subsequently electroporated at 1000 V with a single 5 ms pulse using a Gene Pulser Xcell Electroporation System (Bio-Rad, USA). After electroporation, the engineered exosomes were purified at 25,000×g for 1 h at 4 °C. Transfection and expression of VEGF The rBMSCs were seeded and transfected with the VEGF-encapsulating exosomes (10 μg/mL) when 80% confluency was reached. The gene transfection of VEGF was observed via the expression of enhanced green fluorescent protein (EGFP), which was linked to the VEGF plasmid. Serum-free cell culture supernatants were collected for in vitro HUVEC tube formation assays as previously reported [17]. In addition, qRT-PCR and ELISAs were performed to quantify the expression of VEGF as previously reported [17]. Fabrication and modification of the 3D-printed scaffold A three-dimensional (3D) model of the scaffold was created using SolidWorks 2018 software. Polycaprolactone (PCL) wires with a molecular weight of 50 kDa (Sigma, USA) were printed at a printing temperature of 180 °C and a hot-bed temperature of 25 °C using a 3D printer (Allcct, China). 
The PCL scaffolds were precisely cut into smaller pieces for further analysis. The surface and cut surface of the 3D-printed PCL scaffold were observed using a field emission scanning electron microscope (GeminiSEM 300, ZEISS, Germany). The 3D-printed PCL scaffolds were coated with amino groups by immersing them in 10 mL of 10% (w/v) 1,6-hexanediamine solution for 1 h at 37 °C. After that, the scaffolds were gently washed with ultrapure water and dried in a vacuum oven at 30 °C overnight. The obtained scaffolds were subsequently characterized by X-ray photoelectron spectroscopy (XPS; Kratos, UK). As a qualitative analysis of the amino groups, ninhydrin coloration assays were carried out by immersing the scaffolds in 2 mL of 2% (w/v) ninhydrin solution for 30 min at 50 °C. To graft the CP05 exosomal anchor peptide onto the 3D-printed PCL scaffolds, the amino-coated scaffolds were first incubated in 4 mL of CP05 peptide solution (0.75 mg/mL) under the activation of 1.8 mg of 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide hydrochloride (EDC) and 2.7 mg of N-hydroxysuccinimide (NHS) at 37 °C overnight. The combination of the scaffolds and CP05 conjugated to Alexa Fluor 488 was measured by confocal laser scanning microscopy (FV3000, Olympus, Japan). Finally, the CP05-modified scaffolds were incubated with the VEGF plasmid-loaded exosomes to obtain the gene-activated, exosome-functionalized scaffolds. Evaluation of the biocompatibility between cells and scaffolds in vitro Cell proliferation and adhesion assays were carried out to investigate the biocompatibility of the 3D-printed PCL scaffolds. Briefly, rBMSCs were seeded onto the scaffolds, and cell proliferation was assessed by CCK-8 assay with a plate reader (PerkinElmer, Massachusetts, USA). In addition, the rBMSCs were seeded onto 3D-printed PCL scaffolds and cell adhesion was evaluated by SEM and confocal laser scanning microscopy. 
Cellular uptake and intracellular internalization of exosomes To further explore the intracellular internalization of exosomes, rBMSCs were seeded onto PCL scaffolds conjugated with DiI-labelled exosomes and incubated for 48 h. After washing three times in PBS, the rBMSCs were fixed in 4% paraformaldehyde for 15 min and then washed again. Cell nuclei were stained with DAPI for 10 min, and the cytoskeletons of rBMSCs were stained with FITC for 1 h at 37 °C. The cellular distributions of the exosomes were imaged using confocal laser scanning microscopy. Exosome-mediated osteogenic differentiation in vitro To investigate the exosome-induced osteogenic differentiation of stem cells, rBMSCs from passage 2 were cultured with osteogenic medium containing 5% FBS, 0.1 μM dexamethasone, 10 mM β-glycerophosphate and 50 μg/mL ascorbic acid. Meanwhile, 10 μg/mL of exosomes or VEGF-loaded engineered exosomes were separately set as the experimental groups. The medium and exosomes were changed every 2 days. After culture for 7 days, alkaline phosphatase (ALP) staining (BCIP/NBT solution, Ameresco, USA) was performed following the kit protocol. Alizarin red staining (Solarbio, China) was carried out after culturing for 14 days. The alkaline phosphatase (ALP), collagen type 1 (Col1a1), osteocalcin (OCN), and runt-related transcription factor 2 (Runx2) gene expression levels were assessed by qRT-PCR and normalized to GAPDH; the gene primers used are listed in Table S1. The representative marker OCN (Proteintech, 1:500) was further examined by immunofluorescence staining. Construction of the animal model To track the performance of the engineered exosome-mediated bone scaffolds in vivo, forty male Sprague-Dawley (SD) rats with an average weight of 180 g were used for the radial defect model. 
Briefly, the rats were first anesthetized by isoflurane inhalation (RWD, Shenzhen, China; 2.0-2.5% concentration), and then a segmental defect (~5 mm long) was created in the central radius of each animal. The experimental groups, PCL, PCL-CP05, PCL-CP05~EXOs (10 μg) and PCL-CP05~EXOs-VEGF (10 μg), were then implanted into the defect site. A blank control was also created by loading nothing into the defect site. Each group included five replicates, and samples were harvested at 6 weeks and 12 weeks after surgery. Animal experiments were carried out in compliance with the protocol approved by the Institutional Animal Care and Use Committee of HUST.
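The qRT-PCR analyses above report relative expression normalized to GAPDH (for instance, the roughly 2800-fold VEGF increase in Figure 3C). The paper does not spell out the calculation, but relative expression of this kind is conventionally computed with the 2^-ΔΔCt (Livak) method; the sketch below illustrates that method with hypothetical Ct values, not data from the study.

```python
# Minimal sketch of the standard 2^-ΔΔCt relative-expression calculation
# (Livak method). All Ct values below are hypothetical, for illustration only.

def ddct_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Fold change of a target gene in a treated group vs. a control group,
    normalized to a reference gene (e.g. GAPDH)."""
    dct_treated = ct_target_treated - ct_ref_treated   # delta-Ct, treated
    dct_control = ct_target_control - ct_ref_control   # delta-Ct, control
    ddct = dct_treated - dct_control                   # delta-delta-Ct
    return 2.0 ** (-ddct)

# Example: target gene in treated cells vs. untreated control
# (hypothetical Ct values; lower Ct means more transcript)
fold = ddct_fold_change(ct_target_treated=20.0, ct_ref_treated=16.0,
                        ct_target_control=31.0, ct_ref_control=15.5)
print(f"relative expression: {fold:.0f}-fold")
```

A larger ΔΔCt difference doubles the reported fold change per cycle, which is why qRT-PCR can resolve expression ratios in the thousands.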
Effect of Preliminary Treatment by Pulsed Electric Fields and Blanching on the Quality of Fried Sweet Potato Chips The effects of pulsed electric field (PEF) and blanching pretreatments on frying kinetics, oil content, color, texture, acrylamide (AA) content, and microstructure have been investigated in this paper. The total PEF pretreatment duration was tPEF = 0.2 s with an intensity of E = 1 kV/cm; blanching was studied at 85 °C for 5 min. The results demonstrated that pretreatment significantly reduced the moisture ratio and oil content by 25% and 40.33%, respectively. The total color change ΔE value of the pretreated samples was lower than that of the untreated samples. In addition, pretreatment increased the hardness of the samples after frying, and the AA content in the fried samples pretreated with PEF + blanching was reduced by approximately 46.10% (638 μg/kg). Finally, fried sweet potato chips obtained by the combined pretreatment exhibited a smoother and flatter cross-sectional microstructure. Introduction Fried foods are significantly popular worldwide due to their unique flavors. Sweet potato is a naturally nourishing food rich in protein, fat, polysaccharides, phosphorus, calcium, potassium, carotene, vitamin A, vitamin C, vitamin E, vitamin B1, vitamin B2, and eight amino acids. Sweet potatoes are widely cooked by deep-frying and consumed as French fries and chips [1,2]. Studies have shown that a high fat and calorie intake may lead to metabolic disorders, resulting in an increased risk of hypertension, cardiovascular disease, diabetes, and cancer [3][4][5]. Therefore, there is currently a strong demand for high-quality fried foods with reduced oil uptake and reduced formation of the carcinogen acrylamide. Recommended mitigation measures, such as magnetic fields, microwaves, and UV-C, have been used to control the acrylamide content in potatoes and potato semi-finished products. Sobol et al. 
found that potato tubers exposed to UV-C radiation showed an increase in acrylamide content; however, soaking the semi-product in water resulted in a decrease in the acrylamide content of French fries [6]. Polysaccharides (alginate, pectin, and chitosan) are used in food frying processes and can inhibit the formation of acrylamide by up to 54%, 51%, and 41%, respectively [7]. Conventional blanching pretreatment is widely used to improve the quality of fried sweet potato chips [8]. During blanching, gelatinization of starch can reduce oil uptake during the frying of potato chips [9]. In addition, the sugar and asparagine contents of potatoes can be significantly reduced by hot blanching pretreatment, thus reducing the formation of acrylamide [9,10]. However, heat treatment involves high energy consumption and may lead to unexpected quality changes, such as the loss of soluble nutrients and nutrient deactivation (polyphenols) [11,12]. Pulsed electric field (PEF) treatment is a novel non-thermal physical technique that discharges a sample between two electrode plates by applying high-voltage pulses, mainly exploiting an electroporation mechanism [13][14][15]. PEF pretreatment induces the formation of transient micropores in the lipid bilayer of the cell membrane, which improves cell permeability. Materials The sweet potatoes (Liu Ao Red Sweet Potato) were purchased from a local market in Shanghai, China and were stored in a refrigerator at 4 °C. All experimental data were collected within one week of purchase. Fresh sweet potatoes were cleaned, sliced (27 mm in diameter and 3 mm in thickness), and sampled using a stainless steel circular mold. The initial moisture content of the sweet potatoes (Wi = 3.67 ± 0.1 d.b. or 0.785 ± 0.1 w.b.) was determined by drying the samples at 105 °C in an oven (DHG-9245A, HuiTai, Shanghai, China). 
PEF Pretreatment A PEF generator delivering monopolar pulses (1500 V-1A, Service Electronique USST, Shanghai, China) was used. Figure 1 presents the PEF treatment procedure applied to the sweet potato slices. The processing chamber (Teflon cylindrical tube, industrial processes workshop, USST, Shanghai, China) consisted of two parallel stainless steel electrodes and had a diameter of 41.5 mm and a depth of 100 mm. The electric field intensity was E = 1 kV/cm, and a series of N = 200 pulse trains was applied. Each train consisted of n = 50 pulses with a pulse width of ti = 20 µs and a frequency of 10 Hz. The total time of the PEF treatment was calculated as tPEF = N·n·ti = 0.2 s. The applied protocol allowed a high level of electroporation of the sweet potato tissue based on our preliminary studies. The temperature elevation inside the samples never exceeded 5 °C. The energy input of the PEF pretreatment was 9.47 ± 0.5 kJ/kg, calculated as follows [21]: W = U·I·tPEF/M, where U is the voltage (V), I is the flowing current (A) obtained from the display screen of the generator, tPEF is the total duration of the PEF treatment (s), and M (kg) is the mass of the sample. Blanching Pretreatment Sweet potato slices were heated and stirred at 85 °C for 5 min on a ceramic heating plate HJ-2A (Guohua, Changzhou, China) by following the method of Timolsina et al. [22], with slight modifications. After the blanching treatment, the surface water was removed, and the potato slices were cooled to ambient temperature before frying.
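The pulse-train arithmetic and specific-energy bookkeeping above can be sketched as follows. The protocol (N trains of n pulses of width ti) and the formula W = U·I·tPEF/M come from the text; the voltage, current, and sample mass in the example are illustrative assumptions, since the paper reports only the resulting 9.47 ± 0.5 kJ/kg.

```python
# Pulse protocol from the text: N = 200 trains of n = 50 pulses, 20 µs each.
N, n, t_i = 200, 50, 20e-6
t_pef = N * n * t_i  # total PEF treatment time, s

def specific_energy_kj_per_kg(U, I, t_pef, M):
    """Equation (1): energy input per unit mass, W = U*I*t_PEF / M, in kJ/kg."""
    return U * I * t_pef / M / 1000.0

print(round(t_pef, 6))  # 0.2, as reported in the text

# Hypothetical operating point (NOT reported values): 1000 V, 0.95 A, 20 g.
print(round(specific_energy_kj_per_kg(1000, 0.95, t_pef, 0.020), 3))  # 9.5
```

With these assumed values the energy input lands close to the reported 9.47 kJ/kg, which is only meant to show that the formula is dimensionally consistent with the paper's figure.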
Frying The pretreated and untreated sweet potato slices were then fried in hot sunflower oil contained in an electro-thermal blast furnace at 150 °C (HY-81, Foshan Nanhai Gangyang Electromechanical Equipment Co., Ltd., Foshan, China) with a sample/oil mass ratio of 1/60 for 6 min. Previous results showed that when frying at 150 °C, the PEF pretreatment could significantly decrease the acrylamide content of potato chips by 70% [23]. The mass (m) of the samples was periodically controlled during frying. The moisture ratio (MR) of a sample during frying was calculated as MR = mt/mi, where mi is the initial moisture content and mt is the moisture content after frying, obtained using a moisture analyzer (HC103, Mettler Toledo Instruments Co., Ltd., Shanghai, China). Oil Content The oil content of the fried sweet potato chips was measured using a low-field nuclear magnetic resonance (LF-NMR) spectrometer (PQ001-020-015V, Niumag Corporation, Suzhou, China), with a frequency field of 20 MHz and a temperature of 32 ± 0.01 °C. For these measurements, the nuclear magnetic field strength was 0.5 ± 0.08 T. The sample was placed in a 15 mm glass tube and inserted into the NMR probe. The Carr-Purcell-Meiboom-Gill (CPMG) pulse sequence was applied to measure the transverse relaxation time (T2). Typical pulse parameters included a sampling frequency of 250 kHz, a repetition time of 2000 ms, an echo count of 5000, an echo time of 1 ms, and 4 repeat scans [24]. Standard curves were constructed as follows. Sunflower oil was weighed into 15 mm glass tubes in amounts of 0.1, 0.2, 0.3, 0.4, and 0.5 g, which were held in a water bath at 32 °C for 5 min to obtain the amplitude corresponding to the different masses of oil. Figure 2a presents the distribution of the transverse relaxation time (T2) spectra for the different masses of sunflower oil.
The relaxation signal shown in the figure can be entirely attributed to the protons in the oil molecule and is composed of a small and a large characteristic peak. The oil peak emerged in the range of 12-464 ms, providing a basis for distinguishing the proton signals of water and oil in the sample. The signal amplitude increased as the oil mass increased. The linear equation fitted to the mass and amplitude of the oil was y = 660.59172x − 2.03436, with R2 = 0.99983.
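As a small illustration of how the reported standard curve is used in practice, the fitted line can be inverted to convert a measured CPMG peak amplitude into an oil mass. The slope and intercept are the values reported above; the 0.3 g round-trip value is only an example.

```python
# LF-NMR calibration line reported above: amplitude y = 660.59172*x - 2.03436
# (x = oil mass in g, R^2 = 0.99983). Inverting it converts a measured
# peak amplitude back into an oil mass.
SLOPE, INTERCEPT = 660.59172, -2.03436

def amplitude_from_oil_mass(x_g):
    return SLOPE * x_g + INTERCEPT

def oil_mass_from_amplitude(y):
    return (y - INTERCEPT) / SLOPE

# Round-trip check with a 0.3 g calibration standard (illustrative).
amp = amplitude_from_oil_mass(0.3)
print(round(oil_mass_from_amplitude(amp), 4))  # 0.3
```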
This indicates that the mass of the oil linearly and sufficiently correlated with the peak area (Figure 2b). The intensity of the peak signal is linearly related to the mass of oil; therefore, the correlation between peak intensity and oil mass, obtained by calibration, can be used to determine the oil content (Of) of the sample. Color The color of the samples was determined using a colorimeter (CR-400; Konica Minolta Investment Co., Ltd., Shanghai, China). The color parameter coordinates L* (whiteness or brightness), a* (redness or greenness), and b* (yellowness or blueness) were used to describe the color of the samples [25]. Hunter values (L*, a*, b*) were monitored on the surfaces of untreated and pretreated fresh and fried samples.
The total color difference ΔE was used to express the overall color change during the thermal process and was calculated using Equation (2) as follows: ΔE = √[(L* − L0*)² + (a* − a0*)² + (b* − b0*)²], where L0*, a0*, and b0* indicate the color parameters of the fresh samples, and L*, a*, and b* indicate the color parameters of the fried samples. Texture To obtain the hardness of the chips, a texture analyzer (TA-XT PlusC, Stable Micro Systems Co. Ltd., Manchester, UK) with the texture profile analysis (TPA) mode was used [23]. The sample was placed just below the probe and tested with a P/0.25 S spherical probe while keeping the sample in the same orientation for each test. The parameters were set as follows: pre-test speed of 1.0 mm/s, mid-test speed of 0.5 mm/s, post-test speed of 10.0 mm/s, distance of 1.5 mm, and trigger force of 5 g. The average force between the first peak and 1 s was taken as the hardness of the potato chips. The textural parameters of hardness were calculated from the TPA curve using the Texture Exponent software (Stable Micro Systems Co. Ltd., Manchester, UK).
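The color-difference computation in Equation (2) above can be sketched directly. The CIELAB coordinates below are illustrative placeholders, not measured values from the study.

```python
import math

# Total color difference (Equation (2) in the text):
# ΔE = sqrt((L*-L0*)^2 + (a*-a0*)^2 + (b*-b0*)^2)
def delta_e(fresh, fried):
    return math.sqrt(sum((c - c0) ** 2 for c0, c in zip(fresh, fried)))

fresh_lab = (65.0, 18.0, 40.0)  # hypothetical L*, a*, b* of a fresh slice
fried_lab = (52.0, 22.0, 30.0)  # hypothetical values after frying
print(round(delta_e(fresh_lab, fried_lab), 2))  # 16.88
```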
Liquid Chromatography-Tandem Mass Spectrometry (LC-MS/MS) Analysis of Acrylamide The acrylamide determination of the fried sweet potatoes was performed as described by Liu et al. [23], with slight modifications. An LC-MS/MS system (Agilent 7890, Santa Clara, CA, USA) equipped with an auto-sampler and Atlantis C18 columns (5 µm, 2.1 mm I.D. × 150 mm) was used; 50 g of the fried sample was obtained, pulverized in a food processor (Elfin2.0, Shengzheng, China), and stored frozen at −20 °C. A total of 10 µL of a 10 mg/L 13C3-acrylamide internal standard working solution and 10 mL of ultrapure water were added to 2 g of the pulverized sample, shaken for 30 min, and then centrifuged at 4000 rpm for 10 min using a centrifuge (Medifuge™, Carlsbad, CA, USA); the supernatant was then collected. A matrix solid-phase dispersion extraction method was used for purification. The elution was in isocratic mode using a mixture of 0.1% v/v formic acid and methanol (99.5/0.5, v/v) as the mobile phase at a flow rate of 2 mL/min; the injection volume of the sample was 25 µL. A standard series of working solutions was injected into the LC-MS/MS system, and the peak areas of the corresponding acrylamide and its internal standard were measured. The results for the fried sweet potatoes were expressed in µg/kg.
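The internal-standard quantitation described above can be sketched as follows. The spike amounts (10 µL of 10 mg/L ¹³C₃-acrylamide into 2 g of sample) come from the text; the peak areas and the unit response factor are illustrative assumptions, chosen here only so the result lands near the ~638 µg/kg level mentioned in the abstract.

```python
# Isotope-dilution sketch: quantify acrylamide from the AA / 13C3-AA
# peak-area ratio, assuming a response factor of 1 (an assumption; in
# practice it would be derived from the standard series).
IS_MASS_UG = 10e-6 * 10.0 * 1000.0   # 10 µL of 10 mg/L = 0.1 µg of IS spiked
SAMPLE_KG = 2e-3                     # 2 g of pulverized sample

def acrylamide_ug_per_kg(area_aa, area_is, response_factor=1.0):
    """µg acrylamide per kg of sample from the AA/IS peak-area ratio."""
    aa_ug = (area_aa / area_is) * IS_MASS_UG / response_factor
    return aa_ug / SAMPLE_KG

# Illustrative peak areas (not measured data):
print(round(acrylamide_ug_per_kg(2.552e6, 2.0e5), 1))  # 638.0
```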
Scanning Electron Microscope The microstructure of the sample was obtained using an SEM instrument (Thermo Scientific Apreo 2C, Waltham, MA, USA) operated in low-Vac mode with an accelerating voltage of 10 kV and a magnification of 500×. Ten images from three different samples were analyzed for each experiment. Statistical Analysis Data were obtained from five replicates. Results are presented as the mean ± standard deviation. One-way analysis of variance (ANOVA) was used to analyze the effect of pretreatment using the IBM SPSS Statistics 26 analysis software (IBM Institute, New York, NY, USA). All statistical analyses were performed at a significance level of 0.05 using Duncan's multiple range tests. A software package, Table Curve 2D, version 5.01 (Systat Software, San Jose, CA, USA), was used to fit the curves and obtain the relevant correlation coefficients (R2) and parameters. Effect of Pretreatment on the Moisture Ratio of the Sample Figure 3 presents the relationship between the moisture ratio and frying time of sweet potato chips for the untreated, blanching-pretreated, PEF-pretreated, and combined PEF + blanching-pretreated samples during frying (0-6 min). The moisture ratio of the sweet potato slices was significantly affected by the various pretreatment methods. After frying for 6 min, the moisture ratios of the untreated, blanching-pretreated, PEF-pretreated, and combined PEF + blanching-pretreated samples were 0.07, 0.04, 0.04, and 0.03, respectively. This is consistent with the results of Zhang et al. [20], who evaluated the effects of blanching pretreatment and PEF on the physicochemical properties of French fries. The PEF and blanching pretreatments affect cell integrity and permeability, which directly leads to differences in the moisture ratio after frying. The development of the moisture ratio with frying time was fitted using the empirical Henderson and Pabis equation (Equation (4)) (Figure 3, dashed lines).
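A minimal sketch of the Henderson and Pabis fit, assuming the usual form MR = A·exp(−k·t) and a log-linear least-squares estimate. The moisture-ratio data below are synthetic, chosen only so the estimated rate constant falls near the 7.03–9.71 × 10⁻³ s⁻¹ range reported for this study.

```python
import math

# Synthetic frying curve (time in s, moisture ratio) -- NOT measured data.
t  = [0, 60, 120, 180, 240, 300, 360]
mr = [1.00, 0.56, 0.31, 0.17, 0.10, 0.05, 0.03]

# Linearize Equation (4): ln(MR) = ln(A) - k*t, then ordinary least squares.
x, y = t, [math.log(v) for v in mr]
m = len(x)
sx, sy = sum(x), sum(y)
sxx = sum(xi * xi for xi in x)
sxy = sum(xi * yi for xi, yi in zip(x, y))
slope = (m * sxy - sx * sy) / (m * sxx - sx * sx)
k = -slope                           # frying rate constant, s^-1
A = math.exp((sy - slope * sx) / m)  # frying coefficient
print(f"k = {k:.2e} s^-1, A = {A:.2f}")
```

In practice a nonlinear fit (as done with Table Curve 2D in the text) is preferable, since log-linearization weights the small late-time moisture ratios more heavily.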
The R2 values of the untreated and pretreated samples were relatively high (R2 = 0.980-0.997). The values of the frying rate constant k as a function of pretreatment ranged from 7.03 × 10−3 s−1 to 9.71 × 10−3 s−1 (inset of Figure 3). The results demonstrated that the combination of PEF + blanching pretreatments caused a significant increase in the frying rate constant (p < 0.05). The cell membrane electroporated by PEF can promote water migration from the core to the surface, which also increases the mass transfer during the frying process, thereby increasing the frying rate constant [26]. Similarly, blanching disrupts the plant cell walls by degrading pectin, thereby increasing cell permeability [27]. Compared to the untreated samples, the combination of PEF + blanching pretreatment increased the frying rate of the samples by 38.12% (inset of Figure 3), which demonstrates that they have a synergistic effect on water evaporation during frying. In Equation (4), MR = A·exp(−k·t), where k is the frying rate constant (s−1) and A is the frying coefficient. Effect of Pretreatment on the Oil Content of the Sample Deep-frying is a mass- and heat-transfer process that involves water evaporation and oil absorption [28]. Figure 4 demonstrates the development of the oil content (Of) for the untreated, blanched, PEF-pretreated, and PEF + blanched-pretreated fried sweet potatoes; the dashed lines were obtained by fitting the data with Equation (5).
The relevant correlation coefficients (R2) were all above 0.902, and the parameters of the equation fits for the untreated, blanched, PEF-, and PEF + blanched-pretreated samples are presented in Table 1. In all cases, the oil content increased as the frying time increased. Compared to the untreated chips, the oil content of the sweet potato chips significantly decreased by 33.38%, 31.90%, and 40.33% with blanching, PEF, and the combination of PEF + blanching pretreatments, respectively. This can be explained by the higher frying rate with PEF pretreatment (Figure 3), which forms a crust on the surface of the sweet potato chips, thereby reducing oil absorption during frying [15,17,29]. PEF may also cause more cytoplasm to flow out of the cells, forming a water vapor barrier layer on the surface and ultimately reducing oil absorption [30]. In addition, the smoother tissue surface of the samples resulting from the PEF treatment may lead to less oil adhesion after frying [31,32]. Similarly, starch gelatinization occurred during the blanching pretreatment, which prevents oil penetration during the frying process compared to the untreated samples [8,16]. Zhang et al. [26] found that the combination of PEF + blanching pretreatment decreased the oil content of French fries by 13.8%. Liu et al. [2] investigated the physical-chemical properties of fried sweet potato tubers with PEF pretreatment and found that the oil content decreased by 18.3% with a PEF pretreatment at 1.2 kV/cm and a frying temperature of 190 °C.
In Equation (5), a and b indicate the constants of the model. Effect of Pretreatment on the Total Color Change of the Sample Color is the basic characteristic used to evaluate the quality and acceptance of fried food, which affects the consumers' choice of products [33]. Choi et al. demonstrated that ΔE > 2 indicates that the color of the sample has changed compared to the raw material [34].
The tendency curves of the total color change with frying time were fitted using the linear equation ΔE = a·t + b (Equation (6)) (dashed lines in Figure 5), where a and b are the model constants. The relevant correlation coefficients were R2 ≥ 0.96 (Table 2). The linear equation appears to precisely describe the obtained data for the total color change value. The color of the samples gradually changed from orange to brown during frying. The apparent change (ΔE) was indicated by values increasing from 14.16 to 22.98 within a frying time of 1-6 min for the untreated samples (Figure 5).
After frying, the total color changes ΔE of the blanched, PEF-, and PEF + blanched-pretreated samples were 19.26, 21.57, and 20.34, respectively (Figure 5). The color change in the samples was mainly due to the occurrence of the Maillard reaction during frying. Moreover, the degree of browning depends on the amount of reducing sugars and amino acids on the surface of the samples [35]. Blanching resulted in the leaching of the reducing sugars and amino acids into the solution, which decreased the Maillard reaction during frying, leading to a brighter color. Similarly, PEF can improve the permeability of cells and enable the leaching of reducing sugars and amino acids. However, certain reducing sugars and amino acids may remain on the surface of the samples, resulting in the color of the PEF-pretreated samples not being as bright as that of the blanching-pretreated samples; the color was darker at the edge of the samples. The total color change ΔE of the combined PEF + blanching pretreatment was lower than that of the PEF-pretreated sample but higher than that of the blanching-pretreated sample. This finding does not agree with the results reported by Zhang et al. [20], who found that the combined pretreatment of PEF + blanching significantly reduced the browning degree of French fries during frying. This may be because the trend in the total color change was not consistent across the various types of potatoes (regular potato and sweet potato).
Effect of Pretreatment on the Hardness of the Sample Textural characteristics during frying are indicators of the development of heat and mass transfer processes [36]. Figure 6 presents the hardness versus frying time for the untreated, blanching-pretreated, PEF-pretreated, and PEF + blanching-pretreated samples. Before frying, the hardness values of the pretreated samples (PEF, blanching, and PEF + blanching) were significantly lower than those of the untreated samples (Figure 6). The initial softening of tissues in the PEF-pretreated samples originated from an increase in the cell membrane permeability and cell breakdown [37]. A previous study reported that blanching pretreatment induces lamellar media solubilization and starch gelatinization as a result of tissue softening [38]. A similar softening effect on potato tissues following PEF + blanching was reported by Zhang et al. [20]. For all fried sweet potato chips (untreated and pretreated), the hardness values first decreased (t < 120 s) and then increased with frying time (t > 120 s). This result agrees with previous studies regarding the textural properties of fried potatoes [15,29]. Note, the hardness of the blanching-, PEF-, and PEF + blanching-pretreated samples increased by 14.5%, 10.58%, and 19.92%, respectively, compared to that of the untreated samples at the end of frying (360 s). The final hardening of the samples reflects the formation of a surface crust during frying. Accordingly, the PEF and blanching pretreatments promoted water loss (Figure 3) in the sweet potato chips and reduced the adhesion between the cells, resulting in an increase in the hardness of the sweet potato chips after frying.
Moreover, the formation of a denser skin on the surface of the fried sample may restrict oil immersion while frying the sweet potato tissue (Figure 4).
Effect of Pretreatment on the Acrylamide Content of the Sample In 2002, the Swedish National Food Administration found that the carcinogen acrylamide (AA) was formed in heated starch-based foods [39]. Pedreschi et al. reported that toxic AA is a byproduct of the Maillard reaction of reducing sugars and amino acids during thermal processing [40]. The AA content in the untreated versus pretreated fried sweet potato chips is shown in Figure 7, which demonstrates that pretreatment (PEF, blanching, and PEF + blanching) significantly decreased the AA content in the fried samples by nearly 46.10% compared to the untreated samples. This can be explained by the increase in the frying rate with the pretreatment (less frying time, Figure 3); the leaching of reducing sugars and amino acids from the sweet potato slices by PEF and blanching can also decrease the acrylamide content during frying [41]. The Maillard reaction during frying is related to the degree of browning of the sample. The PEF + blanching pretreatment significantly decreased the color change of the sample (Figure 5) compared to the PEF-pretreated sample; however, there was no significant difference between the pretreatments in AA formation in fried sweet potato chips (Figure 7).
Therefore, the formation of AA during frying is not only linked to color change but also to the reaction substrate, frying temperature, and frying time [42,43]. Liyanage et al. [44] demonstrated that an increase in temperature from 160 to 190 °C decreased the AA content by approximately 90%. They also found that acrylamide formation in the cultivars Atlantic, Snowden, and Vigor pretreated by blanching in distilled water decreased by 19-59%. Genovese et al. reported that the AA reduction for potato chips pretreated by PEF (1.5 kV cm−1, 10 ms, 100 Hz) was 30%, whereas it was 17% for the hot water blanching pretreatment (85 °C, 3.5 min) [45].
Effect of Pretreatment on the Microstructure of the Sample Figure 8 compares the microstructure of the untreated and pretreated samples; the untreated sample is shown in Figure 8a. In contrast, the blanching pretreatment resulted in significant starch gelatinization and a smoother structure (Figure 8b). Due to electroporation, samples pretreated with PEF demonstrated pores on the cell wall of the sweet potato (Figure 8c), which is consistent with the results of the present studies [2,46]. As shown in Figure 8d, in samples pretreated by the combination of PEF + blanching, cracks were found on the sweet potato wall in addition to holes, which promotes the release of water from the surface at a higher rate during frying (Figure 3). After deep-frying, the cells did not exhibit a stereoscopic morphology, and the cell walls were disrupted and no longer upright, which is in agreement with the results of Zhang et al. [47]. In the untreated and pretreated fried samples, starch granules were no longer found, having swollen, gelatinized, and dehydrated during frying [46]. The PEF pretreatment increased the internal porosity (Figure 8g), which allowed the rapid evaporation of water during frying and prevented oil absorption. Furthermore, the internal structure of sweet potato chips obtained via the PEF + blanching pretreatment was smoother and flatter, thereby decreasing the oil absorption during frying (Figure 4). Summary Compared to the untreated samples, the PEF, blanching, and PEF + blanching pretreatments reduced the final moisture ratio, oil content, acrylamide content, and total color change of the fried sweet potato chips.
Furthermore, the combined PEF + blanching pretreatment significantly improved the quality of fried sweet chips when compared to the single or untreated methods. Note, the hardness of the blanching, PEF, and PEF + blanching-pretreated samples increased by 14.5%, 10.58%, and 19.92%, respectively, reflecting the formation of a surface crust during frying, and ultimately decreasing the oil content Summary Compared to the untreated samples, PEF, blanching, and PEF + blanching pretreatments reduced the final moisture ratio, oil content, acrylamide content, and total color change of the fried sweet potato chips. Furthermore, the combined PEF + blanching pretreatment significantly improved the quality of fried sweet chips when compared to the single or untreated methods. Note, the hardness of the blanching, PEF, and PEF + blanchingpretreated samples increased by 14.5%, 10.58%, and 19.92%, respectively, reflecting the formation of a surface crust during frying, and ultimately decreasing the oil content of the sample. Finally, chips pretreated by PEF + blanching had a lower oil (0.37 g/g DM) and
v3-fos-license
2014-10-01T00:00:00.000Z
2012-02-08T00:00:00.000
770386
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://downloads.hindawi.com/journals/ecam/2012/741925.pdf", "pdf_hash": "37c3fbdcc161bcce1844d662cfa912129dbd7a9a", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1394", "s2fieldsofstudy": [ "Biology" ], "sha1": "3e52f2fdd7a041f1476b036b74ec4d8403bcc9ba", "year": 2012 }
pes2o/s2orc
Effects of Brugmansia arborea Extract and Its Secondary Metabolites on Morphine Tolerance and Dependence in Mice The aim of the present study was to investigate, in vivo, the effect of a Brugmansia arborea extract (BRU), chromatographic fractions (FA and FNA), and isolated alkaloids on the expression and the acquisition of morphine tolerance and dependence. Substances were acutely (for expression) or repeatedly (for acquisition) administered in mice treated with morphine twice daily for 5 or 6 days, in order to make them tolerant or dependent. Morphine tolerance was assessed using the tail-flick test on the 1st and 5th days. Morphine dependence was evaluated through the manifestation of withdrawal symptoms induced by naloxone injection on the 6th day. Results showed that BRU significantly reduced the expression of morphine tolerance, while it did not modulate its acquisition. The chromatographic fractions and pure alkaloids failed to reduce morphine tolerance. Conversely, administration of BRU, FA, and the pure alkaloids significantly attenuated both the development and the expression of morphine dependence. These data suggest that Brugmansia arborea Lagerh. might have therapeutic potential for the treatment of opioid addiction in humans. Introduction Brugmansia arborea (L.) Lagerh. is a solanaceous shrub native to South America and widely cultivated in Europe as an ornamental species. In Peru this plant, known by the vernacular names campachu and misha, is employed by shamans in magic and sorcery to get in touch with the gods, as an anti-inflammatory, and in the treatment of rheumatic pains. Misha is one of the most powerful magical plants, a "hot" species, known to act on the central nervous system [1]. Previous phytochemical studies identified some active components of the plant. The tropane alkaloid hyoscine was found in samples collected in Argentina [2]. Other tropane alkaloids have been found in plants collected in Italy [3].
Few pharmacological studies on this plant are available in the literature. Extracts, chromatographic fractions, and pure alkaloids showed inhibitory activity on contraction of isolated guinea pig ileum induced both electrically and by acetylcholine, demonstrating spasmolytic activity in vitro [3]. B. arborea extracts, chromatographic fractions, and pure isolated compounds have also been reported for their activity on the CNS [4]. In vitro studies demonstrated the affinity of methanol and water extracts of the plant for 5-HT1A, 5-HT2A, 5-HT2C, D1, D2, α1, and α2 receptors in binding assays [5,6]. These biological systems are widely involved in the phenomenon of dependence. Opioid addiction in particular is among the most widespread, with a high mortality rate. Therefore, considering that B. arborea extracts and pure alkaloids reduce morphine withdrawal in vitro [7], the aim of the present study was to investigate the possible activity of a methanol extract, its chromatographic fractions, and three pure compounds isolated from B. arborea on the expression and the acquisition of morphine tolerance and dependence in mice. Extraction, Separation, and Identification. One kilogram of leaves and flowers of B. arborea was oven-dried at 40 °C and powdered. The powder was extracted at room temperature with methanol for two days. The extract was concentrated in vacuo, giving 32 g of residue. This crude extract was called BRU. An aliquot of 3.4 g of this extract was purified on a Sephadex LH-20 column, eluting with MeOH. One hundred fifteen fractions were obtained and combined into 15 major fractions on the basis of their chemical similarity as revealed by thin-layer chromatography (TLC). Fractions 3 and 4, containing alkaloids, were purified by RP-HPLC. Fraction 3 was purified using a C18 μ-Bondapak column under the following conditions: flow rate 2.0 mL/min, eluent MeOH:H2O, 7:3; from this fraction pure apoatropine (4.7 mg) was obtained.
Fraction 4 was purified by RP-HPLC using a C18 μ-Bondapak column under the following conditions: flow rate 2.0 mL/min, eluent MeOH:H2O, 6:4, yielding atropine (19 mg) and 3α-tigloyl-oxitropane (12.3 mg). Pure compounds were identified by accurate NMR analyses and comparison of their spectral data with data available in the literature [8,9]. The crude extract (BRU), the fraction containing alkaloids (FA), the fraction not containing alkaloids (FNA), and the pure compounds (atropine, apoatropine, 3α-tigloyl-oxitropane) were tested in mice to study the possible interaction with morphine. Animals. Male CD-1 mice (Harlan SRC, Milan, Italy) weighing 25 g to 35 g were used. These mice were kept in a dedicated room, with a 12:12 h light/dark cycle (lights on at 09:00), a temperature of 20 °C to 22 °C, and a humidity of 45% to 55%. They were provided with free access to tap water and food pellets (4RF18, Mucedola, Settimo Milanese, Italy). Each mouse was used in only one experimental session. Ethical guidelines for the investigation of experimental pain in conscious animals were followed, and procedures were carried out according to EEC guidelines. Drugs. BRU is the denomination of the crude, unpurified methanol extract. This extract (BRU: 7.5, 15, and 30 mg/kg/5 mL), its alkaloidal and nonalkaloidal fractions (FA and FNA at 5.5 and 24.5 mg/kg, resp.), and the single pure compounds (atropine, apoatropine, and 3α-tigloyl-oxitropane, at 2.2, 1.8, and 1.5 mg/kg, resp.) were dissolved in water immediately before use and injected intraperitoneally (i.p.). The same vehicle was administered to the control group. Morphine hydrochloride (Salars S.p.A, Como, Italy) was dissolved in saline immediately before use and was administered subcutaneously (s.c.) at a dose of 10 mg/kg. Naloxone hydrochloride (Sigma, St. Louis, MO), dissolved in water immediately before use, was administered i.p. at a dose of 5 mg/kg. Induction of Morphine Tolerance. According to Abdel-Zaher et al.
[10] and Mattioli and Perfumi [11], morphine tolerance was induced by administering morphine (10 mg/kg; s.c.) twice daily at 12 h intervals for 5 days. Tolerance was evaluated by testing the antinociceptive response to morphine in the tail-flick test on the 5th day, 30 min after the last morphine injection, in comparison with the 1st day. Briefly, the tail-flick test consists of irradiation of the lower third of the tail with an IR source (Ugo Basile, Comerio, Italy). The basal predrug latency, ranging between 2 and 3 s, was calculated as the mean of two trials performed at a 30 min interval. Mice then received the tested compound or the related vehicle 30 min before morphine or saline administration. The antinociceptive activity was evaluated on the 5th day, 30 min after the last morphine injection. A cut-off latency of 12 s was established to minimize tissue damage. The antinociceptive effect was expressed as a percent of the Maximum Possible Effect (MPE%) according to the following formula: MPE% = [(post-drug latency − baseline latency)/(cut-off value − baseline latency)] × 100, where post-drug latency is the tail-flick latency 30 min after the last morphine dose. Effects of BRU on the Expression and the Acquisition of Analgesic Tolerance to Morphine. In order to evaluate the effect of BRU on the acquisition (development) of morphine tolerance, mice (n = 8-12 for each group) were administered BRU i.p. (7.5, 15, and 30 mg/kg) twice daily, 30 min before each morphine treatment. Its effects on the expression phase of morphine tolerance were evaluated in morphine-treated mice receiving an acute administration of BRU 30 min prior to the last morphine injection on the test day (day 5). The effects of BRU on the development and the expression of morphine tolerance were evaluated by testing the analgesic effect of morphine in the tail-flick test as described in detail above.
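The MPE% formula above reduces to a one-line calculation. The sketch below is only illustrative: the 12 s cut-off comes from the protocol, while the latency values are invented for the example.

```python
CUTOFF_S = 12.0  # cut-off latency from the protocol, to minimize tissue damage

def mpe_percent(post_drug_latency: float, baseline_latency: float,
                cutoff: float = CUTOFF_S) -> float:
    """Maximum Possible Effect (%): 100 means full analgesia (cut-off reached),
    0 means no change from the basal pre-drug latency."""
    return (post_drug_latency - baseline_latency) / (cutoff - baseline_latency) * 100.0

# Illustrative: basal latency 2.5 s; a mouse reaching the cut-off on day 1
# (full analgesia) vs. a tolerant mouse responding at 5.35 s on day 5.
print(round(mpe_percent(12.0, 2.5), 1))   # 100.0
print(round(mpe_percent(5.35, 2.5), 1))   # 30.0
```

A tolerant animal thus shows a falling MPE% across days even though the morphine dose is unchanged.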
In addition, the effect of BRU alone on antinociception was examined, by the tail-flick test, in nondependent mice that received single or repeated i.p. administration of different doses of BRU (7.5, 15, and 30 mg/kg). Effects of FA and FNA on the Expression and the Acquisition of Analgesic Tolerance to Morphine. In order to evaluate which fraction of the total extract was responsible for the effect of BRU, the effects of both FA and FNA were tested on the expression and acquisition of analgesic tolerance to morphine. Therefore mice (n = 8-10 for each group) received acute or repeated administration of FA (5.5 mg/kg), FNA (24.5 mg/kg), or related vehicle following the experimental procedure reported above. The doses of the chromatographic fractions were chosen according to their percentage of the 30 mg/kg dose of BRU. Effects of Single Alkaloids on the Expression and the Acquisition of Analgesic Tolerance to Morphine. The three alkaloids isolated from the alkaloid-containing fraction (FA) were also tested on the expression and acquisition of analgesic tolerance to morphine. For this purpose, mice (n = 10 for each group) were administered atropine, apoatropine, and 3α-tigloyl-oxitropane at doses of 2.2, 1.8, and 1.5 mg/kg, respectively, as reported above. The dose of each alkaloid was chosen according to its percentage of the 5.5 mg/kg dose of FA. Induction of Morphine Dependence. To develop dependence, the mice were treated with morphine (10 mg/kg; s.c.) twice daily at 12 h intervals for 6 days [10,12]. Two hours after the last dose of morphine, withdrawal syndrome (abstinence), as an index of morphine dependence, was precipitated by intraperitoneal injection of naloxone (5 mg/kg) [10,12].
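The dose-scaling rule above (each fraction or alkaloid dosed according to its weight share of its parent preparation) is simple proportional arithmetic. A minimal sketch, where the shares are back-calculated from the doses stated in the text (an assumption about the original extraction yields):

```python
# Doses were scaled to each component's share of its parent preparation:
# FA and FNA as fractions of 30 mg/kg BRU, and each alkaloid as a fraction
# of 5.5 mg/kg FA. Shares are back-calculated from the stated doses.
BRU_DOSE = 30.0   # mg/kg, highest BRU dose tested
FA_DOSE = 5.5     # mg/kg  -> FA is ~18.3% of BRU by weight
FNA_DOSE = 24.5   # mg/kg  -> FNA is ~81.7% of BRU

def scaled_dose(parent_dose_mg_kg: float, share: float) -> float:
    """Dose of a component given its weight share of the parent extract."""
    return parent_dose_mg_kg * share

fa_share = FA_DOSE / BRU_DOSE
print(round(scaled_dose(BRU_DOSE, fa_share), 1))  # 5.5

# Alkaloid doses (2.2, 1.8, 1.5 mg/kg) expressed as shares of the FA dose:
for dose in (2.2, 1.8, 1.5):
    print(round(dose / FA_DOSE * 100, 1))  # percentage of FA
```

This keeps each treatment arm chemically comparable to the amount of that component delivered by the parent extract.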
The combination of morphine with a high dose of naloxone on day 6 has been demonstrated to induce more severe symptoms, including autonomic signs, since naloxone precipitates dose-dependent withdrawal symptoms in animals acutely or chronically dependent upon morphine [12]. Ten minutes before naloxone treatment, the mice were placed in a transparent acrylic cylinder (20 cm diameter, 35 cm high) to habituate them to the new environment. Immediately after the naloxone challenge, each mouse was again placed gently in the cylinder and then monitored for 15 min for the occurrence of withdrawal signs (jumping, rearing, forepaw tremor, and teeth chatter). The withdrawal symptoms are reported as a summary of all of the signs that were observed. Effects of BRU on the Expression and the Acquisition of Morphine Dependence. To examine the effects of the total extract on morphine dependence, BRU (7.5, 15, and 30 mg/kg) was given to mice (n = 8-12 for each group) i.p. chronically, 30 min prior to each morphine injection (acquisition), or acutely before naloxone (expression), as described above. In addition, the effect of BRU alone on naloxone-induced jumping behaviour was examined in nondependent mice. Animals received single or repeated administration of different doses of BRU (7.5, 15, and 30 mg/kg; i.p.) or vehicle (5 mL/kg; i.p.). The assessment of naloxone-precipitated jumping behaviour after administration of BRU has already been described in detail. Effects of FA and FNA on the Expression and the Acquisition of Morphine Dependence. To examine the effects of the two fractions of the total extract on the expression and the acquisition of morphine dependence, FA (5.5 mg/kg) and FNA (24.5 mg/kg) were tested on naloxone-precipitated withdrawal behaviour as reported above (n = 8-10 animals for each group). Effects of Single Alkaloids on the Expression and Acquisition of Morphine Dependence.
Finally, the single isolated alkaloids (atropine, apoatropine, and 3α-tigloyl-oxitropane) were administered at doses of 2.2, 1.8, and 1.5 mg/kg, respectively, based on their percentage of the alkaloidal fraction FA, in order to evaluate their effect on the expression and development of morphine dependence (n = 10 for each group). The experimental procedure was the same as reported above. Statistical Analysis. The statistical analysis was performed using two-way split-plot analysis of variance (ANOVA), with treatment as the between-subject factor and time as the within-subject factor, to analyze morphine tolerance. The morphine dependence data were analyzed by one-way analysis of variance (ANOVA). When appropriate, post hoc analysis was carried out using a Newman-Keuls test to determine the differences between groups. Statistical significance was set at P < 0.05, and the data are expressed as means ± S.E.M. Results The bioassay-oriented study of a methanol extract of Brugmansia arborea permitted the isolation of three tropane alkaloids: atropine, apoatropine, and 3α-tigloyl-oxitropane. This result agrees with the available literature, which reports that the genus Brugmansia contains this class of alkaloids [3]. Effects of BRU on the Expression and the Acquisition of Analgesic Tolerance to Morphine. Acute administration of BRU (7.5, 15, and 30 mg/kg) 30 min before morphine injection on the test day produced a significant decrease in the expression of morphine tolerance as compared to the morphine-vehicle group. In particular, the post hoc analysis revealed a statistically significant effect at the highest dose of 30 mg/kg (P < 0.001) (Figure 1(a)). Acute administration of BRU to nondependent mice (control) did not modify the analgesia latency of the treated mice (P > 0.05) (data not shown).
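The one-way ANOVA used for the dependence data can be sketched in a few lines of pure Python. The withdrawal-sign counts below are invented for illustration (they are not the study's data), and the Newman-Keuls post hoc step is omitted.

```python
# One-way ANOVA computed by hand: partition total variance into
# between-group and within-group sums of squares, then form the F ratio.
def one_way_anova(*groups):
    """Return (F statistic, df_between, df_within) for k groups of values."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

# Illustrative total withdrawal-sign counts per mouse (not the study's data):
control  = [3, 4, 2, 3, 4]
morphine = [18, 22, 20, 19, 21]
bru_30   = [9, 11, 10, 8, 12]
f_stat, dfb, dfw = one_way_anova(control, morphine, bru_30)
print(dfb, dfw, f_stat > 1)  # 2 12 True
```

A large F against the F(df_between, df_within) distribution corresponds to the P < 0.05 criterion used in the paper; the bracketed values reported in the Results, e.g. [F(7, 74) = 17.774; P < 0.001], are exactly this statistic with its two degrees of freedom.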
Concerning the effect of BRU on the acquisition of analgesic tolerance to morphine, overall analysis of variance revealed statistically significant treatment and time effects [F(7, 74) = 22.523, P < 0.001; F(1, 7) = 52.156, P < 0.01, resp.], and interaction time × treatment effects were seen on analgesia latency [F(7, 74) = 10.253, P < 0.001] (Figure 1(b)). The mice treated with morphine showed a maximal antinociceptive effect (%MPE) on day 1 [F(7, 74) = 17.783; P < 0.001]. Post hoc analysis revealed that acute coadministration of BRU with morphine did not modulate the analgesia at any of the doses tested on day 1 compared to the vehicle-morphine group (P > 0.05). Repeated s.c. administration of 10 mg/kg morphine alone twice daily induced, on day 5, a significant decrease in the analgesia latency in the tail-flick test, which resulted in a 70% reduction [F(1, 18) = 41.204; P < 0.001] (Figure 1(b)). Pretreatment of the mice with 7.5, 15, and 30 mg/kg BRU 30 min before each morphine injection did not inhibit the development of tolerance to morphine analgesia at any of the doses tested (P > 0.05) (Figure 1(b)). Finally, repeated administration of BRU to nondependent mice (control) did not modify the analgesia latency of the treated mice (P > 0.05) (Figure 1(b)).
Figure 1: Effects of BRU on the expression (a) and the acquisition (b) of tolerance to morphine-induced analgesia. Mice were treated twice daily for 5 days with either saline or 10 mg/kg morphine. BRU (0, 7.5, 15, and 30 mg/kg) was administered 30 min before each morphine injection (acquisition) or prior to the last morphine treatment (expression). Morphine antinociceptive effect (%MPE) was assessed on day 1 and day 5, as indicated. Significant differences: **P < 0.01, compared to control group; ++P < 0.01, compared to the related morphine group on day 1; ••P < 0.01, compared to morphine group on day 5.
Figure 2: Effects of FA and FNA on the expression (a) and the acquisition (b) of tolerance to morphine-induced analgesia. Mice were treated twice daily for 5 days with either saline or 10 mg/kg morphine.
FA (5.5 mg/kg) and FNA (24.5 mg/kg) were administered 30 min before each morphine injection (acquisition) or prior to the last morphine treatment (expression). Morphine antinociceptive effect (%MPE) was assessed on day 1 and day 5, as indicated. Significant differences: **P < 0.01, compared to control group; ++P < 0.01, compared to the related morphine group on day 1.
As shown in Figure 2, both single (Panel (a)) and repeated (Panel (b)) administration of FA (5.5 mg/kg) and FNA (24.5 mg/kg), 30 min before morphine injection on the test day, did not significantly prevent or reverse morphine tolerance (P > 0.05). In fact, FA and FNA did not affect morphine-induced antinociception elicited by single or repeated injection, as confirmed by statistical analysis (P > 0.05). Figure 3 shows the effects of atropine, apoatropine, and 3α-tigloyl-oxitropane on the expression (Panel (a)) and the acquisition (Panel (b)) of analgesic tolerance to morphine. Two-way ANOVA followed by post hoc analysis revealed no significant difference in the MPE% of morphine between the alkaloid-treated groups and the vehicle-treated mice in either the expression [F(7, 64) = 45.963, P < 0.001] or the acquisition [F(7, 64) = 36.520, P < 0.001] test. Finally, repeated administration of atropine, apoatropine, and 3α-tigloyl-oxitropane to nondependent mice (control) did not modify the analgesia latency of the treated mice (P > 0.05) (Figure 3(b)).
Effects of BRU on the Expression and the Acquisition of Morphine Dependence. Figures 4(a) and 4(b) show the effects of BRU on the expression and the acquisition of morphine dependence, respectively. Repeated administration of morphine produced physical dependence, as assessed by the summary of a characteristic set of behavioural responses including jumping, rearing, forepaw tremor, and teeth chattering following naloxone challenge [F(1, 18) = 29.933; P < 0.001]. As shown in Figure 4(a), acute administration of BRU 30 min prior to naloxone injection significantly decreased the expression of morphine dependence, as assessed by the summary of the frequencies of the withdrawal signs compared with those of morphine-dependent mice treated with vehicle [F(7, 74) = 10.518; P < 0.001]. In particular, the post hoc analysis revealed a statistically significant and dose-dependent effect at all doses tested (7.5, 15, and 30 mg/kg) (P < 0.001) (Figure 4(a)). No difference was observed in control mice treated with any dose of BRU when compared to vehicle (P > 0.05). Figure 4(b) shows the effects of repeated administration of BRU on the development of morphine dependence. Pretreatment of the mice with BRU, 30 min before each morphine injection, attenuated the development of the characteristic withdrawal signs, reported as total signs [F(7, 74) = 17.774; P < 0.001] (Figure 4(b)). Indeed, the post hoc analysis revealed a statistically significant effect both at the highest dose of 30 mg/kg (P < 0.01) and at the lower doses of 7.5 and 15 mg/kg (P < 0.01). Mice treated only with BRU (7.5, 15, and 30 mg/kg) did not show a significant difference in withdrawal symptoms compared to the control group (P > 0.05) (Figure 4(b)).
Figure 4: Effects of BRU on the expression (a) and the acquisition (b) of morphine dependence. Mice were treated twice daily for 5 days with saline or 10 mg/kg morphine. On the sixth day, withdrawal syndromes were precipitated by injection of 5 mg/kg naloxone, 2 h after the last morphine injection. BRU (0, 7.5, 15, and 30 mg/kg) was administered 30 min before each morphine injection (acquisition) or prior to naloxone injection (expression). The withdrawal symptoms are given as a summary of the frequency of these somatic signs: jumping, rearing, forepaw tremors, and teeth chatter. Significant differences: **P < 0.01, compared to control; +P < 0.05, ++P < 0.01, compared to morphine group.
Effects of FA and FNA on the Expression and the Acquisition of Morphine Dependence.
Overall analysis of variance revealed a statistically significant treatment effect [expression: F(5, 54) = 12.265, P < 0.001; acquisition: F(5, 54) = 11.418, P < 0.001]. Post hoc analysis revealed that both single-dose administration of FA (5.5 mg/kg), 30 min before naloxone injection on the test day, and its repeated injection 30 min before each morphine treatment produced a significant decrease in the expression and the acquisition of morphine dependence, respectively (P < 0.01), as compared to the vehicle group in morphine-treated mice (Figures 5(a) and 5(b)). In fact, both acute and repeated FA administration significantly suppressed the naloxone-precipitated withdrawal symptoms by about 50% in the morphine-dependent mice (P < 0.001). Conversely, single or repeated administration of FNA (24.5 mg/kg) was ineffective in preventing or treating morphine dependence in mice (P > 0.05) (Figures 5(a) and 5(b)). On the other hand, treatment with FA or FNA in nondependent mice did not affect naloxone-precipitated withdrawal symptoms elicited by single or repeated injection as compared to the control group (P > 0.05) (Figure 5).
Figure 5: Effects of FA and FNA on the expression (a) and the acquisition (b) of morphine dependence. FA (5.5 mg/kg) and FNA (24.5 mg/kg) were administered 30 min before each morphine injection twice daily for 5 days (acquisition) or prior to naloxone injection (expression). On the sixth day, withdrawal syndromes were precipitated by injection of 5 mg/kg naloxone, 2 h after the last morphine injection. The withdrawal symptoms are given as a summary of the frequency of these somatic signs: jumping, rearing, forepaw tremors, and teeth chatter. Significant differences: **P < 0.01, compared to control; ++P < 0.01, compared to morphine group.
Effects of Single Alkaloids on the Expression and the Acquisition of Morphine Dependence. As shown in Figure 6(a), atropine, apoatropine, and 3α-tigloyl-oxitropane (2.2, 1.8, and 1.5 mg/kg, resp., i.p.) administered 30 min prior to naloxone injection inhibited about 50% of the naloxone-precipitated withdrawal symptoms compared with those of the morphine control group [F(7, 56) = 31.552, P < 0.001]. Overall analysis of variance revealed that repeated coadministration of atropine, apoatropine, and 3α-tigloyl-oxitropane with morphine significantly decreased the frequencies of the signs of withdrawal syndrome compared with the frequencies of withdrawal manifestations of morphine-dependent mice treated with vehicle [F(7, 56) = 25.192, P < 0.001]. Indeed, the post hoc analysis revealed a statistically significant effect for all alkaloids tested (P < 0.01) (Figure 6(b)). Mice acutely or repeatedly treated only with atropine, apoatropine, or 3α-tigloyl-oxitropane did not show a significant difference in withdrawal symptoms compared to control animals (P > 0.05) (Figures 6(a) and 6(b)).
Discussion The present study has revealed the presence of three tropane alkaloids in the methanol extract of Brugmansia arborea. These compounds are usual in the genus [13], and no significant chemical differences have been found with the same species growing in America. Moreover, it evaluated the effects of BRU, a methanol extract of B. arborea, its alkaloidal and nonalkaloidal chromatographic fractions, and isolated alkaloids on both the acquisition and the expression of morphine tolerance and physical dependence in mice. The data demonstrate that administration of BRU at a high dose attenuates the expression of morphine tolerance by increasing the antinociceptive response, but does not attenuate its acquisition. On the other hand, neither the alkaloidal and nonalkaloidal fractions nor the pure alkaloids proved effective in preventing or countering tolerance to the analgesic effect of morphine. Conversely, administration of BRU attenuates both the expression and the development of morphine dependence by reducing the naloxone-induced behavioural and vegetative withdrawal symptoms in a dose-dependent manner.
Indeed, BRU was effective in reducing the incidence of withdrawal symptoms in morphine-dependent mice. These data assume greater importance when it is considered that, historically, withdrawal symptoms were believed to have a major role in the relapse to drug-taking behaviour after drug abstinence [14]. The effects of BRU appear to be due to its alkaloidal fraction, as demonstrated by the effectiveness of the single alkaloids in preventing and countering the onset of withdrawal symptoms induced by naloxone injection. On the other hand, the nonalkaloidal fraction was ineffective in reducing the incidence of withdrawal symptoms in morphine-dependent mice. The results of this study support and extend previous findings that central cholinergic receptors participate in the expression and acquisition of opiate withdrawal symptoms [15]. Indeed, central cholinergic neurons have long been suggested to mediate many of the signs and symptoms of opiate withdrawal [15]. In fact, pharmacological blockade of central muscarinic receptors by antagonists such as scopolamine has been widely used in the treatment of drug abuse, especially opioid addiction [16,17]. Moreover, a role of the cholinergic system in the rewarding properties of morphine has also been demonstrated: blockade of muscarinic receptors by different doses of atropine injected into the basolateral amygdala abolished morphine-induced place preference in rats [18].
Figure 6: Effects of single alkaloids on the expression (a) and the acquisition (b) of morphine dependence. Atropine, apoatropine, and 3α-tigloyl-oxitropane were administered i.p. at doses of 2.2, 1.8, and 1.5 mg/kg, respectively, 30 min before each morphine injection (acquisition) or prior to naloxone injection (expression) (n = 10 animals for each group). The withdrawal symptoms are given as a summary of the frequency of these somatic signs: jumping, rearing, forepaw tremors, and teeth chatter. Significant differences: **P < 0.01, compared to control; ++P < 0.01, compared to morphine group.
The molecular and neurobiological mechanisms underlying the attenuation of morphine dependence by the alkaloids present in B. arborea could be related to their direct blockade of muscarinic cholinergic receptors, which have been shown to interact with opioid receptor signalling [19]. Another possibility is that the B. arborea attenuation of morphine dependence is due to indirect effects on the mesocorticolimbic dopaminergic pathway. The decreased dopaminergic activity in the VTA induced by morphine withdrawal increases the activity of the noradrenergic and cholinergic systems, which are mainly involved in morphine withdrawal symptoms. Recent research suggests a possible role for M5 receptors in the treatment of opiate addiction, based on the brain-region localization of M5 AChRs and their involvement in the regulation of striatal dopamine release and in rewarding brain stimulation [20]. Moreover, it was demonstrated that VTA ACh levels play a causal role in drug seeking and reward, and these effects were strongly attenuated by local infusion of a muscarinic antagonist such as atropine [21,22]. Therefore, B. arborea alkaloids, based on their tropane structure, might prevent and counter morphine dependence through the reduction of ACh levels in the VTA. Finally, the DA system plays a critical role in drug craving and relapse, conditions that occur with dependence and withdrawal. Administration of D1 and D2 receptor agonists attenuated somatic withdrawal signs [23][24][25]. Therefore, the B. arborea extract might attenuate morphine dependence by acting directly on the mesocorticolimbic dopaminergic pathway, since the affinity of the extract for D1 and D2 receptors has been demonstrated [5]. In conclusion, the positive effects of B.
arborea extract and its pure alkaloids in the expression and development of morphine dependence encourage the use of the plant for the treatment of opioid addiction.
v3-fos-license
2023-02-17T15:06:00.404Z
2017-07-20T00:00:00.000
256914252
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1038/s41598-017-06085-3", "pdf_hash": "7f891f1b4486797d1937a411d7a7aea76dc265d2", "pdf_src": "SpringerNature", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1396", "s2fieldsofstudy": [ "Biology" ], "sha1": "7f891f1b4486797d1937a411d7a7aea76dc265d2", "year": 2017 }
pes2o/s2orc
A genomic glance through the fog of plasticity and diversification in Pocillopora Scleractinian corals of the genus Pocillopora (Lamarck, 1816) are notoriously difficult to identify morphologically, with considerable debate on the degree to which phenotypic plasticity, introgressive hybridization, and incomplete lineage sorting obscure well-defined taxonomic lineages. Here, we used RAD-seq to resolve the phylogenetic relationships among seven species of Pocillopora represented by 15 coral holobiont metagenomic libraries. We found strong concordance between the coral holobiont datasets, reads that mapped to the Pocillopora damicornis (Linnaeus, 1758) transcriptome, nearly complete mitochondrial genomes, 430 unlinked high-quality SNPs shared across all Pocillopora taxa, and a conspecificity matrix of the holobiont dataset. These datasets also show strong concordance with previously published clustering of the mitochondrial clades based on the mtDNA open reading frame (ORF). We resolve seven clear monophyletic groups, with no evidence for introgressive hybridization among any but the most recently derived sister species. In contrast, ribosomal and histone datasets, which are most commonly used in coral phylogenies to date, were less informative and contradictory to these other datasets. These data indicate that extant Pocillopora species diversified from a common ancestral lineage within the last ~3 million years. Key to this evolutionary success story may be the high phenotypic plasticity exhibited by Pocillopora species. Scleractinian corals within the genus Pocillopora (Lamarck, 1816) are among the most widely distributed and abundant reef-building corals, found throughout the Pacific and Indian Oceans and the Red Sea 1,2. Previous classifications of the genus based on morphology have been controversial due to high levels of inter- and intraspecific colony variation 3, with more than 40 described species, of which only about 17 are generally accepted 4.
Recently, the genus has been the focus of many studies to delineate species boundaries using a variety of genetic markers 2,5-13. Of these genetic markers, the mitochondrial open reading frame (mtORF), a putative protein-coding region of unknown function 14, has been one of the most informative, resulting in the current delimitation of five distinct mtORF clades 11. Some of the mtORF haplotypes are highly isolated, whereas others are geographically widespread 8,10,12,13,15. Although they show some agreement with nuclear markers 6,9, micro-skeletal morphology 11,12, life history 16,17, and geography 10, they are less concordant with gross colony morphology 12,13, which may indicate that mtORF clades do not correspond to true species or that taxonomic relationships may be confused due to introgression 18 and/or phenotypic plasticity 11,19,20. To add complexity, Pocillopora corals, like other scleractinian reef-builders, are also considered holobionts: an assemblage of species that includes the host coral animal as well as symbiotically associated dinoflagellate algae (Symbiodinium Freudenthal, 1962), bacteria, viruses, archaea, and protists 21, which together form the ecological unit of a coral and are extracted along with the host genetic material. To test these hypotheses, we generated RAD-seq data 22 from 15 coral holobiont metagenomic libraries representing seven nominal species of Pocillopora. We then compared several datasets: (1) mtDNA assemblies obtained by reference to the complete mitochondrial genome of P. damicornis (Linnaeus, 1758) (accession number: NC_009797 14); (2) histone reference assemblies identified from de novo contigs using BLAST 23; (3) ribosomal contigs identified by reference to the 18S, ITS1, 5.8S, ITS2, and 28S region of P. damicornis (accession number: AY722785 24); (4) contigs that mapped to the coral transcriptomic data of Bhattacharya et al. 25 and Traylor-Knowles et al.
26 ; (5) all loci from the complete holobiont metagenomic libraries that passed filtering; and (6) holobiont single nucleotide polymorphism (SNP) loci of high quality that were shared by all Pocillopora taxa. Our objectives were to create a rooted phylogeny for the genus Pocillopora with age estimates for each node, and to determine if there was concordance among the various datasets. The mitochondrial genome also showed high posterior support at clade nodes, yielding good support for the monophyly of each lineage (Fig. 2b). Bootstrap support in the maximum-likelihood phylogeny, however, was reduced for the three most recently diverged species (following the names proposed in the most recent formal taxonomical review of this genus by Schmidt-Roach et al. 11 ), Pocillopora verrucosa (Ellis & Solander, 1786), P. damicornis, and P. acuta (Lamarck, 1816). Samples S2 and S3 had the greatest mean coverage depth across the mitochondrial genome (135.5 and 205.8, respectively), resulting in 100% coverage of the mitochondrial genome for sample S2 (Table S2). Coverage across the mitochondrial genome was most reduced for individuals in the P. damicornis clade (SD1, SD4, and R17: 44%, 35.1%, and 37.7% respectively). Bayesian and maximum likelihood phylogenetic analysis of the histone dataset recovered a topology similar to that of the holobiont and transcriptomic data; however, P. acuta was recovered as paraphyletic (Fig. S2b). Across the histone marker, individuals J295, SD2, and SD6 had the lowest percent coverage (30.3, 13.7, and 4.5%, respectively), and individuals S2, S3, and SD3 had the highest percent coverage (100, 95.3, and 96.1%, respectively) (Table S2). The tree topology without these low coverage samples is similar to that of the complete tree, and despite the low coverage of some individuals in the histone reference, the topology matches other approaches, so we included all data in these comparisons.
In contrast, posterior support for the topology recovered using the ribosomal region was generally low for the majority of nodes and the species P. acuta and P. verrucosa appear polyphyletic (Fig. S2a). Based on ribosomal genes alone, placement of individuals J001, R16, and J295 was most highly supported, whereas the placement of P53 was least supported (Fig. S2a). The positions of individuals P53, J001, and R16 based on ribosomal data (Fig. S2a) differed from that seen in the holobiont, mitochondrial, histone, and transcriptomic phylogenies (Figs 1, 2b, S2b and S1). Despite the relatively high percent coverage of the ribosomal reference for all individuals (avg. 88%, Table S2), this phylogeny (Fig. S2a) was the least well resolved and the most inconsistent with all other approaches reported here (Fig. S2b). SNP analyses. The SNAPP results, plotted into a cloudogram to represent the underlying tree topology distributions 28 , showed clear divergence between lineages and well-resolved monophyly among all mtORF clades with the exception of the two most recently diverged sister species, P. damicornis and P. acuta (Fig. 3). The phylogenetic position of individuals Pacu02 and R17, in particular, shows evidence of alternative placement with some of the loci in this analysis, which was otherwise congruent with the holobiont, transcriptome, and mitochondrial analyses described above. Alternate trees emerging from the SNAPP analysis might derive from contamination by loci other than the coral host, or can be evidence of introgressive hybridization or incomplete lineage sorting among recently derived taxa 28 .
By comparing results of different analyses, we can draw inferences about the likely mechanism driving alternate tree topologies in the SNAPP cloudogram: contamination ought to be distributed at random with respect to topology, whereas incomplete lineage sorting should be proportional to the time since the most recent ancestor, and introgression should be limited to species capable of hybridizing. The only alternate tree topologies common enough to appear in the cloudogram involve the most recently derived sister taxa: the P. damicornis/acuta complex (Fig. 3). Conspecificity matrix. The groupings obtained from the conspecificity matrix approach 29 were congruent with the mitochondrial, holobiont and transcriptomic phylogenies, and individuals of the same morphospecies clustered with high conspecificity scores (see colored boxes on the sides of Fig. 4). Conspecificity scores were high between sister species: P. acuta and P. damicornis, as well as between P. ligulata (Dana, 1846) and P. sp. B. Two libraries, P. verrucosa (S2) and P. eydouxi (J001), had a lot of missing data and therefore conspecificity scores near zero with all individuals. Conspecificity provides a sensitive test for introgression among taxa; contrary to a previous study on bdelloid rotifers, where conspecificity signal away from the diagonal suggested considerable introgression between species 29 , here we find that only individuals from closely related species present a high conspecificity signal (Fig. 4). Discussion Here we show that a reduced representation (RAD) genomic approach generally supports previous work using the mtORF marker 2, 6, 7, 9-14 . A similar concordance was also observed between mtORF delimitations and the complete holobiont dataset, the mitochondrial genome data, and the data that mapped onto a published coral transcriptome 25,26 .
Our phylogenetic and conspecificity matrix analyses supported the reciprocal monophyly among all species represented by the individuals in our dataset, with no evidence for introgressive hybridization among most species of Pocillopora except possibly the most recently derived sister species P. damicornis and P. acuta, which show evidence of potential hybridization or incomplete lineage sorting. Neither outcome would be surprising given their median divergence age of less than a million years, but further sampling and analyses are clearly needed to infer whether hybridization is occurring or if these species are still in the process of diverging. Here, we provide evidence for reciprocal monophyly among the majority of currently recognized Pocillopora species. Although our geographic sampling is not as exhaustive as some previous studies 2, 10, 13 , we include the extremes of the geographic and morphological range in the genus to show strong concordance among a variety of different approaches that together provide support for the mtORF marker as a species level marker. The exception to this generalization is that P. meandrina (Dana, 1846) and P. eydouxi share a common haplotype (mtORF type 1) and cannot be differentiated with this marker. Obviously additional sampling is needed both across the geographic ranges of the nominal taxa, and across the range of morphological variation seen within the genus to confirm species boundaries within Pocillopora and determine whether previously unrecognized narrow range endemics or cryptic species exist. However, given the striking monophyly of the taxa included here, we predict that future sampling will reveal low level genetic variation within valid nominal species, and do not expect to see evidence of frequent hybridization between any of these species. 
Combosch and Vollmer 31 reported a lack of monophyly between three morphospecies sampled from the Tropical Eastern Pacific (TEP) using RAD-seq, which contrasts with our findings. Although we do not have extensive sampling of TEP lineages, some of our seven species have broad geographic coverage that includes the TEP. There are three possible reasons for this inconsistency between studies. First, morphological misidentification to species is rampant in this genus 2,9,11,12 , and it takes only a single misidentified individual in pooled samples to show mixed signal and bias the results toward introgression; our RAD-seq libraries were therefore all generated using PCR-free library preparation methods from individually barcoded individuals to eliminate this potential bias 22 . Second, Combosch and Vollmer 31 used pools of individuals based on ORF and ITS2 types, with heterozygous ITS2 types considered as likely hybrids. Pooling small numbers of individuals into a single library may result in fewer individuals per pool than mean sequencing depth 32 , and PCR error and unequal representation of individuals in the pool can bias results 33,34 . Another alternative may be that heterozygous ITS2 types provide an unreliable indication of hybrid origin. Consistent with this second alternative, our findings are in agreement with the reciprocal monophyly of mtORF types reported by Combosch and Vollmer 31 , but are not consistent with individuals possessing heterozygous ITS2 types being identified as likely hybrids. Further, our results indicate that relationships reconstructed from ribosomal genes are most at odds with the remainder of the dataset. Phylogenies based on morphology, mtDNA and ITS have often been at odds (e.g., Figs 1, 2 and S2a), which has resulted in controversy over interpretation of ITS data as resulting from hybridization, or from incomplete lineage sorting 31 .
Our data offer insights into this long-standing debate and suggest that ribosomal DNA clades, although sometimes useful to delineate pocilloporid species (e.g. P. ligulata in Hawai'i or Stylophora sp. A and sp. B in Madagascar 35,36 ), should not be trusted blindly when dealing with Pocillopora species. A third potential source of bias is that among anonymous RAD-seq libraries of the holobiont, non-coral loci (e.g., contamination of coral libraries from Symbiodinium or other commensal or ingested organisms) could be misinterpreted as shared genetic variation that provides misleading evidence for hybridization. Other RAD-seq methods result in short reads that are challenging to identify via BLAST, particularly in the absence of a reference genome 34 . ezRAD is unique in this regard because it allows assembly of long contiguous portions of the genome, up to complete mtDNA genomes [37][38][39] , that can then be grouped in different subsets: comparing the results obtained from each subset adds confidence to our findings if they are congruent, as was largely the case here. De novo assembly of longer contigs allows us to ensure that some subsets of loci being analyzed originate from the coral host rather than from a symbiotic or prey contaminant. For example, the subset of our loci that mapped with high confidence to transcribed genes of P. damicornis 25,26 , as compared to those that mapped to either of two Symbiodinium genomes 40,41 (see supplementary materials), allowed us to compare initial phylogenetic reconstruction based on holobiont metagenomic loci (the complete anonymous locus dataset) to subsets of the data that can be positively identified as either coral host or Symbiodinium loci.
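The kind of dataset-by-dataset concordance comparison described above can be quantified by counting shared clades between two topologies. The following is a minimal illustrative sketch, not the tree-comparison software used in the study; each tree is represented simply as a set of its non-trivial clades (frozensets of tip labels):

```python
def shared_clades(tree_a, tree_b):
    """Fraction of clades in tree_a that also appear in tree_b.

    Each tree is a set of frozensets of tip labels (its non-trivial
    clades), so concordant topologies score near 1.0 and conflicting
    topologies score near 0.0.
    """
    if not tree_a:
        return 0.0
    return len(tree_a & tree_b) / len(tree_a)


# Invented three-taxon example (labels are illustrative only):
# tree A groups "dam" with "acu"; tree B partially conflicts.
tree_a = {frozenset({"dam", "acu"}), frozenset({"dam", "acu", "ver"})}
tree_b = {frozenset({"dam", "acu"}), frozenset({"dam", "ver"})}
print(shared_clades(tree_a, tree_b))  # one of two clades shared -> 0.5
```

Real tree-comparison software works on bipartitions of unrooted trees and accounts for differing tip sets, but the core idea, scoring overlap between clade sets, is the same.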
The concordance of each dataset, with the exception of the ribosomal and known symbiont loci, indicates that discordant information in these data is not positively misleading, and the biological signal is strong enough to withstand noise introduced by phylogenetically unrelated sequences. Concordance between the holobiont, coral transcriptomic, coral mitochondrial, and coral SNP phylogenies presented here indicates strong support for reciprocal monophyly of each of these species, other than the most recently derived sister species. However, we cannot determine whether introgression or incomplete lineage sorting is responsible for blurring monophyly among these sister taxa for these datasets. Further, comparing the datasets (transcriptomic, mitochondrial genome, histone, ribosomal and SNPs) allows for an examination of consistency and reliability of phylogenetic reconstruction among the approaches and subsets of available data. Most of the datasets agree with previously reported mtORF designations (Fig. 2a) and provide strong support for the topology recovered by our overall holobiont dataset (Fig. 1). The most difficult species to resolve using the mtORF marker have been P. meandrina and P. eydouxi, which share the same mitochondrial haplotype (Table 1, type 1) but are distinct in microskeletal morphology 11 , and are also resolved in our phylogenetic analyses of the holobiont, transcriptomic, histone, ribosomal and SNP data, as well as by the conspecificity matrix approach (Figs 1, 3, 4, S1 and S2). The striking outliers to general concordance of the species trees reconstructed among the datasets include: (1) the trees based on the Symbiodinium reads (but we have too few reads that map to the symbionts to place much confidence in these trees, Fig. S3), (2) the histone dataset (Fig. S2b), and (3) the ribosomal dataset (Fig.
S2a), which each reveal some striking differences that likely explain some of the contradictory conclusions about species boundaries and hybridization reported in this group to date. For example, in our ribosomal dataset (Fig. S2a) we were unable to resolve P. acuta (mtORF type 5) and P. verrucosa (mtORF type 3), similar to two previous studies: Schmidt-Roach et al. 9 , who were unable to resolve P. damicornis and P. verrucosa; and Pinzón et al. 2 , who were unable to resolve P. damicornis, P. verrucosa, and a yet unnamed haplotype (mtORF type 7), using ITS2. Further study is needed to determine whether the discordance of the ribosomal genes most commonly used in phylogenetic studies is a peculiarity of this RAD-seq dataset, an inability to phase the nuclear genes in this approach with Pocillopora, or an issue with corals in general [42][43][44] . Pocillopora corals are notorious for extreme phenotypic plasticity, and nearly continuous morphological transition from one morphospecies to another is common 11,12,19,45 . Light and water movement are among the most important variables that induce morphological change in corals 46,47 . For example, in the Gulf of California, five morphospecies of Pocillopora have been recorded 48 , however only mtORF type 1 (P. meandrina and P. eydouxi) has been documented to occur in that geographic region 8 . Additionally, Paz-García and colleagues recently documented colonies in-situ switching between three different morphospecies found in the Gulf of California (all mtORF type 1) resulting from shifts in environmental conditions in as little as six months 19 . Adding to these previous data, our results indicate that the high morphological diversity within Pocillopora is not a consequence of hybridization but is rather due to plasticity, as reported previously for the closely related genus Stylophora 36 .
The exception to reciprocal monophyly among the seven species in our study was between the most recent sister species, which is expected given the young age of extant species (Fig. 5). Based on these data, the radiation that gave rise to extant Pocillopora species is estimated to have occurred less than 3 Myr ago. However, fossil evidence indicates that Pocillopora originated during the Eocene 49, 50 (56-33.9 Mya), and was one of the dominant genera in the Caribbean during the Pliocene, and most of the Pleistocene 51 . By the middle of the Pleistocene however, there was only one Caribbean Pocillopora species remaining, P. palmata (Geister, 1977), which went extinct ~82,000 years ago 51,52 . Pocillopora was rare in Indonesian Miocene assemblages 53 . Our age estimates for the radiation that gave rise to the extant members of this genus suggest that surviving Pacific Pocillopora likely experienced a bottleneck and subsequent rapid expansion during the Plio-Pleistocene. Major geological and climatic events between 4-2.5 Mya, such as the Northern Hemisphere glaciation, which brought with it strong glacial-interglacial cycles 54,55 , a reduction in the El Niño effect 56 , and the closure of the Isthmus of Panama 57 , most likely had a strong impact on Pocillopora species, which appear to have undergone rapid speciation. In contrast to the clear species boundaries and reciprocal monophyly of Pocillopora reported here, a recent study on the sister genus Stylophora, which also underwent recent morphological diversification in the Red Sea during the same time, indicates that it remains a syngameon united by some gene flow 58 . Today, Pocillopora species occur in 97.7% of the Indo-Pacific ecoregions 59 and show high abundance in many reefs from low to high latitudes 1 .
The wide geographic distribution of this genus, despite its relatively recent origin, suggests rapid dispersal and establishment across the entire Indo-Pacific region within less than three million years. This evolutionary success story may be facilitated by their high phenotypic plasticity, which, as has been suggested for other organisms, can stimulate diversification by allowing adaptation to diverse conditions 60 . Conclusion Our results indicate that species of Pocillopora are genetically distinct, but also highlight that morphological data must be supplemented with genetic data (mtORF at minimum) for accurate identification of species in this genus. The widely used mtORF marker shows promise as a species-level barcoding marker because it shows strong concordance with the reciprocal monophyly recovered in the holobiont, transcriptomic, mitochondrial, SNP, and conspecificity data. However, the limited resolution of this mitochondrial marker still leaves some taxa unresolved (e.g., P. meandrina and P. eydouxi), limiting its use as a universal barcode in the genus. The lack of evidence for introgressive hybridization between species here indicates that gross morphological plasticity is characteristic of Pocillopora species, and that caution should be used when interpreting poorly resolved gene trees from only a few genetic markers, particularly the commonly used ribosomal gene markers, which appear contradictory to other datasets in these analyses. Our fossil-calibrated phylogeny further suggests that extant Pocillopora species are young (likely not older than ~3 Mya). This rooted phylogeny provides a template upon which ecological, demographic, life history, and population genetic questions may be further investigated to better understand the evolutionary processes that have shaped this widespread coral genus. Methods Taxon sampling. Tissue samples were collected from the Tropical East Pacific, Hawai'i, and Australia in 2013.
The dataset includes 13 samples from the Pocillopora genus and two outgroup samples from closely related genera, Stylophora pistillata (Esper, 1797) and Seriatopora hystrix (Dana, 1846). All tissue samples were stored in either salt-saturated DMSO (dimethyl sulfoxide) buffer 61 or >95% ethanol until DNA was extracted. DNA extraction and quantification. Genomic DNA was extracted from tissues using the OMEGA (BIO-TEK) E-Z 96 Tissue DNA Kit, but instead of the 1 × 200 µl recommended elution, 2 × 100 µl were collected in HPLC grade H2O in order to capture higher molecular weight genomic DNA. HPLC grade water was used instead of the supplied buffer so the sample volume could be reduced, via a speed-vac, without concentrating the salts, which might interfere with downstream steps. Extractions were inspected on a 1% agarose gel, using TAE buffer, GelRed (Biotium, Inc.) gel stain and the Bioline Hyperladder 1 (200-10,000 bp). Samples were considered acceptable if there was a high band or a smear with at least half of the sample above 2,500 bp. Extractions were quantified using the AccuBlue™ (Biotium, Inc.) High Sensitivity dsDNA quantification kit with 8 standards and measured using a Molecular Devices SpectraMax M2 microplate reader at λEx/λEm 485/530 nm. Library preparation. ezRAD libraries 22 were generated following the protocol of Knapp et al. (2016). Briefly, all samples were adjusted to approximately 1 µg of DNA in 25 µl based on the AccuBlue microplate readings prior to digestion, by either dilution or concentration via evaporation with a speed-vac at room temperature. Genomic DNA was digested using the isoschizomer restriction enzymes MboI and Sau3AI (New England BioLab), which both cleave at GATC recognition sites.
Digestions were performed in 50 µl reactions consisting of: 18 µl HPLC grade water, 5 µl Cutsmart Buffer (provided with restriction enzyme), 1 µl MboI (10 units), 1 µl Sau3AI (10 units) and 25 µl dsDNA (~1 µg) with the following thermocycler profile: 37 °C for 3 hours, then 65 °C for 20 mins. All digested samples were then cleaned using Beckman Coulter Agencourt AMPure XP purification beads at a 1:1.8 (DNA:beads) ratio following the standard protocol. The digests were run on a 1% agarose gel (as above) and were considered fully digested when there was a smear with little to no DNA above 5,000 bp. Illumina sequencing. All libraries were generated following the Illumina TruSeq Sample Prep v2 Low Throughput protocol. All libraries were size selected at 300-500 bp and passed through quality control steps (bioanalyzer and qPCR) and sequenced at the Hawai'i Institute of Marine Biology (HIMB) Genetics Core Facility (GCF). With the exception of libraries S2 and S3, which were sequenced on the MiSeq platform, all libraries were sequenced as paired-end 100 bp runs on the Genome Analyzer IIX system (GAIIx, Illumina, Inc.). Reference assemblies. Raw Illumina reads were sorted by barcode and lists of paired reads were trimmed on both the 5′ and 3′ ends for the adapter sequences using TRIM GALORE! (Andrews 2010). A PHRED score of 20 was used for all libraries. Both paired and unpaired reads were kept but reads <99 bp in length were discarded. Paired reads were validated and then merged using PEAR v0.9.6 with default settings 62 . Merged and non-overlapping reads were concatenated into a single file for each library for the 'holobiont' dataset. Below we describe how subsets of these reads were gathered into the 'transcriptome' and 'symbiont' datasets. 
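The read-cleaning rules described above (PHRED cutoff of 20, reads shorter than 99 bp discarded) can be sketched for a single read. This is an illustrative stand-in for what TRIM GALORE! does, not the tool itself, and it assumes standard Phred+33 quality encoding:

```python
def quality_trim(seq, qual, min_q=20, min_len=99):
    """Trim low-quality bases from the 3' end of one read and apply
    the length filter used in the pipeline.

    seq  -- nucleotide string
    qual -- FASTQ quality string, Phred+33 encoded
    Returns the trimmed (seq, qual) pair, or None if the read falls
    below min_len and would be discarded.
    """
    scores = [ord(c) - 33 for c in qual]
    end = len(scores)
    # walk in from the 3' end while bases are below the PHRED cutoff
    while end > 0 and scores[end - 1] < min_q:
        end -= 1
    if end < min_len:
        return None
    return seq[:end], qual[:end]
```

For example, a 100 bp read whose last base has quality 2 (character `#`) is trimmed to 99 bp and kept, while a short read is dropped entirely. Adapter removal, which TRIM GALORE! also performs, is omitted here for brevity.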
To generate the transcriptome dataset, holobiont libraries were mapped to the Pocillopora damicornis transcriptome, which consists of 29,875 contigs 25 , using BWA v0.7.12 63 with the MEM algorithm for single reads and default parameters (with the exception of restricting the output to only map scores of 10 and higher). SAM files were converted to BAM files using SAMTOOLS 64 and BAM files were converted to FastQ files using BEDTools 65 . Consensus sequences were generated by clustering in pyRAD using the same parameters as were used for the holobiont libraries. Library S2 was one of the highest quality libraries and was selected for de-novo assemblies. Assemblies were conducted using the GENEIOUS v 8.1.4 assembler with the de-novo low sensitivity/fast settings. Contigs >200 bp were compared against a local version of the National Center for Biotechnology Information (NCBI) GenBank nt database, which was downloaded on 4/13/2015, using the Basic Local Alignment Search Tool (BLAST) Megablast program 23 to identify loci and to avoid contigs that may arise from assembly artifacts, or chimeric assemblies from multiple portions of the coral holobiont. The contigs were sorted by e-scores and the consensus sequence of one particularly long contig with high coverage and long blast hits (a contig blasting to coral histone proteins 2, 3, and 4; 4,519 bp) was selected to serve as a reference sequence. All libraries were assembled to this reference sequence using the default parameters (high sensitivity iterated up to five times and the medium/read mapping settings) in GENEIOUS v8.1.4. Consensus sequences were made from each library (not including the reference sequence) using the 75% majority option and N's were called if coverage was 2X or less. Multiple sequence alignments were constructed using MUSCLE 66 for the complete holobiont dataset and reads that mapped either to the transcriptome, the mitochondrial genome, the histone marker, or the ribosomal region.
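The consensus-calling rule described above (75% majority, with N called when coverage is 2X or less) can be sketched per alignment column. This is a simplified illustration of the rule, not the GENEIOUS implementation:

```python
from collections import Counter

def consensus_base(column, majority=0.75, min_cov=3):
    """Call a consensus base from one alignment column.

    column -- string or iterable of bases covering this position
    Returns 'N' when coverage is 2X or less (fewer than min_cov
    bases), or when no single base reaches the majority threshold;
    otherwise returns the majority base.
    """
    bases = [b for b in column if b in "ACGT"]  # ignore gaps/ambiguity
    if len(bases) < min_cov:
        return "N"
    base, count = Counter(bases).most_common(1)[0]
    return base if count / len(bases) >= majority else "N"
```

For instance, a column covered by "AAAT" (3 of 4 reads agree, exactly 75%) is called "A", while "AACG" (no base at 75%) and "AA" (only 2X coverage) are both called "N".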
With EXABAYES v1.4.1, default parameters were used for both the holobiont and transcriptomic data; default parameters were also used for the mitochondrial, histone, and ribosomal data, with the exception that 10,000,000 generations were sampled. By default, EXABAYES v1.4.1 applies the GTR model for nucleotide evolution with 1,000,000 generations, a sampling frequency of 500, and a burn-in of 2,000 generations. Final trees were produced using CONSENSE in EXABAYES v1.4.1, which generates a consensus of all sampled trees after burn-in. Trees were visualized in FigTree v1.4.2 (http://tree.bio.ed.ac.uk/software/figtree/). For all of our Maximum Likelihood analyses (RAxML 8.1.15 69 ) we used the GTRGAMMA model of nucleotide evolution and conducted a rapid bootstrap analysis and search for the best scoring tree in a single run (-f a). For our holobiont and transcriptomic data we used 100 rapid bootstrap replicates to estimate clade support, for the mitochondrial genomes we used 1000 rapid bootstrap replicates, and for the histone and ribosomal data we used 10,000 bootstrap replicates. Trees were visualized in FigTree v1.4.2. SNAPP Analysis. Species trees were estimated from single nucleotide polymorphism (SNP) data drawn from the holobiont dataset, which was analyzed using the SNAPP package in BEAST2 28 . To generate the unlinked, biallelic SNPs required by SNAPP, we used contigs of 140-300 bp in length from library S2 to generate a reference against which all other libraries were aligned. This reference was generated by dereplicating contigs using Rainbow 70 and clustering contigs using VSEARCH (-cluster_smallmem). As this process outputs both consensus and centroid sequences, we extracted the consensus sequences and indexed them for use as a reference with SAMTOOLS 64 and BWA v0.7.12 63 . Each holobiont library was then mapped to this reference using BWA v0.7.12 63 with the settings described above.
The resulting SAM files were converted to BAM format using SAMTOOLS 64 , and read group information was added using PICARD (http://broadinstitute.github.io/picard/). The GENOME ANALYSIS TOOLKIT (GATK) 71 was used to re-align around indels. Following dDocent 72 , we used FREEBAYES 73 to call variants (-0 -E 3 -G 5 -z 0.1 -X -u -n 4 -! 10 --min-repeat-entropy 1 -V -b) and filtered our VCF file to remove indels and extract unlinked, biallelic SNPs using VCFTOOLS 74 (--min-meanDP 3 --remove-indels --thin 300 --remove-indv SD6 --remove-indv SS1 --max-missing 1). Both outgroups were removed for this analysis to allow more SNPs to be recovered within the ingroup, which resulted in 430 high quality informative SNPs shared across all Pocillopora libraries. The VCF file was converted to binary nexus format using PGDSpider v.2.0.9.1 75 . In BEAUti all taxa were treated as distinct species and the priors, u and v, were calculated from the data. In BEAST 2.3.2 the MCMC chain was run for 3,000,000 generations, sampling every 1,000 generations 28 . Convergence was assessed in TRACER 76 and the first 10% was removed as burn-in. Divergence time estimation. Mitochondrial genomes were aligned using MUSCLE 66 with five iterations and were then checked manually. From this alignment, four mitochondrial gene regions (COX1: 1,549 bp; ND5 CDS: 12,937 bp; large ribosomal subunit: 1,972 bp; ATP8: 220 bp) were extracted and the appropriate model of nucleotide evolution was determined to be GTR for each region based on AICc scores using jModeltest 2.1.4 78 . For divergence estimates we used the gamma site model, with gamma category count set to 4, the relaxed clock log normal, and the birth death model in BEAST 2.3.2 28 . We constrained the age of the Madracis node to the Lower Campanian (83.6 MYA) using the gamma prior (Alpha = 2.0, Beta = 2.0, Offset = 80.0) based on the fossil record of Madracis johnwellsi, which first appears in Tibet during this time 30 .
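The thinning step in the VCF filtering above keeps retained SNPs at least 300 bp apart on each contig so that sites are approximately unlinked, as SNAPP requires. Its logic can be sketched in a few lines; this is an illustration of the thinning rule only, not a VCFTOOLS replacement (contig names and positions below are invented):

```python
def thin_sites(sites, min_dist=300):
    """Keep variant sites so that no two retained sites on the same
    contig are closer than min_dist bp.

    sites -- iterable of (contig, position) tuples
    Returns the kept sites sorted by contig and position, mirroring
    the effect of a per-contig distance thinning filter.
    """
    kept, last = [], {}
    for contig, pos in sorted(sites):
        # keep the first site on a contig, then only sites far enough
        # from the previously kept site on that same contig
        if contig not in last or pos - last[contig] >= min_dist:
            kept.append((contig, pos))
            last[contig] = pos
    return kept


print(thin_sites([("c1", 100), ("c1", 250), ("c1", 450), ("c2", 10)]))
# the site at c1:250 is dropped (only 150 bp from c1:100)
```

In the real pipeline this runs alongside the biallelic, depth, and missing-data filters before export to SNAPP's nexus format.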
The Markov Chain Monte Carlo was run for 40,000,000 generations, storing every 1000 generations. Convergence and mixing were checked using Tracer v1.6 76 , then 10% of trees were discarded as burn-in, and the maximum clade credibility tree with median node heights was generated with TreeAnnotator v.2.3.2 28 . Data Availability. Raw genetic data is available through the short read archive at NCBI, BioProject PRJNA386062, and final DNA alignments of the following are available as Supplementary Material: • Mitochondrial genome alignments • Histone alignment • rDNA alignment
Impact of Product Packaging Elements on Consumer Purchase Notion: A Study FMCG Items The main intention of this paper is to investigate the impact of product packaging elements on the purchase behaviour of consumers and then analyse consumers' purchasing capability. The aim of this research is to study the elements of product packaging. This research paper seeks to examine the need to understand consumer purchase notion in order to correctly design product packaging elements and to achieve the desired position in the minds of consumers. To create the right packaging for their products, companies must understand the consumer buying process and the impact of packaging elements as variables that can influence the purchase decision of consumers. The present research paper focuses on the impact of packaging elements on the purchase notion of consumers with regard to FMCG items; the scope of the study was limited to Hyderabad city. A structured questionnaire was used to measure the effect of packaging elements, with a sample size of 825 respondents, and the data were examined through descriptive statistics, percentages, ANOVA, correlation and multiple regression analysis using SPSS version 20.0. The results of the study showed that media vehicles like TV, newspapers, magazines and the internet have a statistically significant influence on consumer purchasing towards FMCG packaging. The product packaging elements likewise have a statistically significant impact on the purchase notion of consumers, although an element like the background of the packaging is not statistically significant. Keywords: Packaging elements, Consumers, FMCG, Purchase notion, Background colour. DOI: 10.7176/EJBM/11-10-06 Publication date: April 30th 2019 The study reveals that self-service and consumers' changing lifestyles have the ultimate impact on consumer choice. With the increase in impulse buying behaviour, labelling is also speaking to the consumer.
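The one-way ANOVA mentioned above can be illustrated with a pure-Python computation of the F statistic. The groups and ratings below are invented purely for illustration; the study itself analysed 825 responses in SPSS 20.0:

```python
def one_way_anova_f(groups):
    """Return the F statistic for a one-way ANOVA.

    groups -- list of samples, each a list of numeric observations
    (e.g. packaging-element ratings split by media vehicle).
    F = (between-group mean square) / (within-group mean square).
    """
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand = sum(sum(g) for g in groups) / n
    # between-group sum of squares: group means vs the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # within-group sum of squares: observations vs their group mean
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))


# Hypothetical ratings from two respondent groups:
print(one_way_anova_f([[1, 2, 3], [4, 5, 6]]))  # -> 13.5
```

A large F relative to the critical value for (k-1, n-k) degrees of freedom leads to rejecting the null hypothesis of equal group means, which is the logic behind hypotheses such as HO1 and HO2 in this study.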
Saeed, Lodhi, Mukhtar, Hussain, Mahmood and Ahmad (2013) combine brand image, brand attachment and environmental consequences and their impact on consumer purchase decisions. Their results elaborate that brand image does not have a positive relation with the purchase decision, brand attachment has a moderately positive relation with the purchase decision, and environmental consequences likewise do not have a positive relation with the purchase decision (Ahmed & Kazim, 2011). Consumers buy a greater quantity of a product after looking at a well-labelled product; therefore, labelling influences consumer buying behaviour, although there are other elements that also affect it (Saeed, Lodhi, Rauf, Rana, Mahmood & Ahmed, 2013). In recent times, people have become more inclined towards green shopping because of a growing awareness of environmental protection. Green buying is basically the act of buying environmentally friendly products. The research model in this study examines the effects of predictor variables (environmental concern, organizational green image and environmental knowledge) upon the criterion variable (green purchase intention) with the moderating effect of perceived product price and quality (Rettie & Brewer, 2000; Barber, Almanza, & Donovan, 2006). Adelina & Morgan (2007) conclude that packaging can be treated as one of the most valuable tools in today's marketing communications; packaging has a vital effect on consumers' buying behaviour, and packaging and its elements can influence the consumer's purchase decision (Ahmed et al., 2014). According to Karbasivar & Yarahmadi (2011), there is greater clothing impulse buying and use of promotional methods (cash discounts) among the sample, and in-store displays (window displays) have an important role in encouraging consumers to buy on impulse. Sellers can present complementary products to encourage consumers to buy on impulse.
Sellers can also boost apparel impulse buying by decorating their stores in a modern style and using appealing lighting and colours. The result of the study proves that there is a pivotal relationship between window display, credit cards, promotional activities (discounts, free products) and consumer impulse buying behaviour (Alice, 2006). According to Erzsebet & Zoltan (2007), both the qualitative and quantitative studies confirmed that respondents adopted similar risk-reduction strategies in their purchase of baby care products. These studies investigated consumer perceptions and buying behaviour of baby care products; the results of the primary research showed that consumers' needs were satisfied by the product in terms of reliability, performance and packaging. According to Butkeviciene, Stravinskiene and A. Rutelione (2008), impulse buying is indeed a relevant component in CE retailing, thus justifying the use of sales packaging. However, optimization is still important: from a cost and environmental perspective, it is very expensive to apply sales packaging (with additional material use and transport volume) to products that do not need it, or to use it in an ineffective way. Saeed, Lodhi, Mukhtar, Hussain, Mahmood and Ahmad (2011) combined brand image, brand attachment and environmental consequences and examined their impact on consumer purchase decisions: brand image does not have a positive relation with purchase decision, brand attachment has a moderately positive relation with purchase decision, and environmental consequences do not have a positive relation with purchase decision (Ahmed, Arif & Meenai, 2012).

Objectives of the Study

Following are the primary objectives of the study:
 To study the types of media vehicles that influence consumer purchase behaviour towards the packaging of FMCG products.
 To examine the impact of product packaging elements on consumer purchase behaviour towards FMCG products.

Hypotheses of the Study

Following are the hypotheses of the study:
 HO1: There is no significant influence of media vehicles on consumer purchase behaviour towards the packaging of FMCG products.
 HO2: There is no significant impact of product packaging elements on consumer purchase behaviour.

Scope of the Study

The literature review surveys various investigations and the conceptual frameworks created by scholars, which focus mainly on packaging terms and their performance. Only a few studies on packaging have been carried out in the Indian context within the FMCG sector, and only some of them address competitive strategies and the role of packaging in the changed competitive situation of the Indian markets. The present study focuses mainly on the impact of product packaging and the packaging strategies implemented in the FMCG sector, on the strategies applied by marketers to influence consumer purchase behaviour through product packaging, and on the impact of product packaging strategies on consumer purchase notion towards the FMCG sector.

Significance of the Concept

For the purpose of the present study, packaging is conceptualized as follows: "packaging involves promoting, protecting and enhancing the product". Packaging promotes the product by attracting attention; the principal promotional task of the package is to attract attention. Since perception is selective, the package should be designed to attract attention in a visually cluttered environment. It should also inform the buyer about the product. Packages contribute to instant recognition of the company or brand and induce the buyer to purchase. At the point of purchase, where the customer chooses, the package communicates more to the buyer than the actual product.
The package must convey the right emotional qualities about the product, namely that it fulfils the customer's need. Packaging plays a major role in influencing consumers through its image, since it carries the ideal information, characteristics and advantages of the product. Product packaging is one of the strategies of every organization; it is an internal strategy that increases sales by attracting more customers, because most customers judge a product by its packaging before buying it.

Methodology
 Research design: descriptive research.
 Sources of data: the study is concerned with consumer perception and product packaging strategies. Primary data were collected from the respondents through a structured questionnaire and interviews, in order to gather data on the product packaging strategies that affect consumer purchase perception. Secondary data were collected from various journals, periodicals such as magazines and business newspapers, and from subject-related books and websites.
 Age: 21.8%, 16.6% and 5.6% of the respondents belong to the age groups 31-40 years, below 20 years, 41-50 years and 51 years and above, respectively.
 Gender: from the table it is evident that 70.5% of the respondents are male and 29.5% female.
 Education: more than 33.1% of the respondents hold a PG qualification, followed by 28.4%, 17.1%, 12.1% and 9.3% with degree, PG and above, intermediate and SSC qualifications, respectively.
 Occupation: more than 36.2% of the respondents work as private employees, while 30.5%, 11.9%, 11.2% and 10.2% are government employees, business people, students and homemakers, respectively.
 Income in rupees: 38.5% of the families have an income between 30,001-40,000, followed by 24.6%, 15.2%, 11.3% and 10.4% with incomes of 40,001-50,000, 20,001-30,000, 50,001 and above, and below 20,000, respectively.

Conceptual Framework (Image 1)

The above table reveals the mean differences between the two sets of variables, media exposure and consumer purchase behaviour: 764.381 is reported as the between-group variation for TV as a source of information and consumer purchase behaviour, and 764.381 as the within-group variation of TV and consumer purchase behaviour. The table also shows an F-value of .858, and the significance level is less than 0.05. On this basis the alternative hypothesis is accepted and the null hypothesis is rejected: there is a significant influence of TV exposure on consumers purchasing FMCG products. For the other media exposure channels (Radio, Newspaper, Magazine, Outdoor and Internet) the between-group variations are 20.409, 17.800, 17.117 and 16.293.

The next question is whether there is any significant impact of the packaging elements of products on consumer purchase behaviour. The table shows that for the packaging element Colour of Packaging (E1) and consumer purchase behaviour, the between-group variation in the sum of squares is 21.344; this value is small because the mean values of the groups are close together. The within-group variation is 876.251 and the F-value is 1.156. Finally, the significance value is 0.002, which is smaller than 0.05, so the null hypothesis can be rejected: Colour of Packaging (E1) has an impact on consumer purchase behaviour with respect to FMCG products.
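The hypothesis test described above can be sketched in a few lines. The sketch below is illustrative only: the paper ran its ANOVA in SPSS 20.0, and the Likert-scale responses, the three hypothetical respondent groups and the application to an invented packaging element are assumptions made for the example.

```python
# Hypothetical sketch of a one-way ANOVA testing whether purchase-behaviour
# scores differ across respondent groups for a packaging element
# (e.g. "Colour of Packaging"). All data below are invented.
from scipy import stats

# Invented Likert-scale purchase-behaviour scores, grouped by how important
# each respondent rated the colour of packaging.
low_importance = [2, 3, 2, 3, 2, 4, 3]
mid_importance = [3, 4, 3, 4, 4, 3, 5]
high_importance = [4, 5, 5, 4, 5, 4, 5]

f_stat, p_value = stats.f_oneway(low_importance, mid_importance, high_importance)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")

# Decision rule used in the paper: reject the null hypothesis when p < 0.05,
# i.e. conclude the element has a significant effect on purchase behaviour.
if p_value < 0.05:
    print("Reject H0: the packaging element has a significant effect")
```

The same call can be repeated for each element (E1-E10) to produce the table of F-values and significance levels discussed in the text.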
For the remaining packaging elements, Background image of Packaging (E2), Materials of Packaging (E3), Printed information (E4), Innovation of packaging (E5), Label of Packaging (E6), Quality of Packaging (E7), Design of Packaging (E8), Language used on Packaging (E9) and Brand image on the Packaging (E10), the between-group variations are 27.922, 17.052, 8.680, 9.279, 23.390, 12.640, 16.380, 26.080 and 21.348; the within-group variations are 823.525, 459.049, 660.957, 968.170, 806.930, 1341.825, 674.536, 980.461 and 710.975; the F-values are 1.610, 1.763, .623, .455, 1.376, .447, 1.153, 1.263 and 1.425; and the significance levels are 0.056, 0.009, 0.000, 0.001, 0.001, 0.002, 0.005, 0.000 and 0.000. With one exception, these significance values are smaller than 0.05, so the table indicates that the alternative hypothesis is accepted and the null hypothesis rejected: the packaging element dimensions have an impact on consumer purchase behaviour towards FMCG products. The exception is Background image of Packaging (E2), which does not have a significant impact on consumer purchase behaviour, because its significance level of 0.056 is greater than 0.05.

Correlations between the packaging element variables and the purchase behaviour of the final consumers (** correlation is significant at the 0.001 level, 2-tailed): the packaging elements Colour of Packaging (r = .542**), Printed information (r = .612**) and Quality of Packaging (r = .513**) are strongly correlated with consumer purchase behaviour, Brand image on the Packaging (r = .443**) shows a moderate correlation, whereas elements such as Label of Packaging (r = .311**) and Language used on Packaging (r = .312**) show weak correlations. The table also reveals that the value of the F-distribution is statistically significant; therefore the null hypothesis is rejected and the alternative hypothesis accepted.
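The Pearson correlations reported between packaging elements and purchase behaviour can be illustrated with a small sketch. The ratings below are invented, and the paper's own analysis was performed in SPSS, so this is only a schematic reproduction of the method.

```python
# Hypothetical sketch of a Pearson correlation between one packaging element
# and purchase-behaviour scores. All ratings are invented for illustration.
from scipy import stats

# Invented Likert-scale ratings from the same ten respondents.
printed_information = [5, 4, 5, 3, 4, 5, 2, 4, 5, 3]
purchase_behaviour = [5, 4, 4, 3, 4, 5, 2, 3, 5, 3]

r, p = stats.pearsonr(printed_information, purchase_behaviour)
print(f"r = {r:.3f}, p = {p:.4f}")
# An r near +1 would mirror the strong correlation the paper reports for
# printed information (r = .612); values near 0 indicate a weak link.
```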
The result implies that there is variation in the dependent variables caused by the independent variables.

Managerial Implications

Marketers ought to come up with modern and distinctive packaging when they launch new products in the market. Mass media play an important role in influencing consumer purchase behaviour, so marketers should maintain high quality of video and audio and clarity of the message about product packaging. If businesses or manufacturers maintain the proper elements of packaging after careful examination, it helps to build and generate more effective sales; the one sure way to get better results is to build effective strategic planning of the packaging in the respective product segments.

Limitations of the Research

No research is without limitations, especially in the case of surveys conducted through structured questionnaires or personal interviews, so the outcomes of the present study are limited and responses may vary geographically. With these apparent reasons in mind, the following limitations can be noted. The sample is drawn from Hyderabad city, so it may not represent the whole population; hence generalization is limited. A sample size of 825 respondents was selected in and around Hyderabad city relative to the universe; biased respondent answers, the deficiency of published and unpublished literature on product packaging strategies, and time restrictions are further limitations.
This study does not consider all FMCG product packaging; only a few categories of FMCG products are considered, namely personal care products (cosmetics), dairy products (milk, ghee, ice cream), food products (biscuits, bread, cakes) and beverages (soft drinks and energy drinks), and the study only evaluates the impact of packaging strategies on the purchase behaviour of the selected respondents. Personal bias of respondents while answering the questions may have skewed the results slightly, although an effort has been made to verify the results through quantitative and qualitative checks.

Conclusions

This study attempted to explore the impact of product packaging elements on consumer purchasing behaviour and to understand their influence on decision making by consolidating various viewpoints, in order to reach a conclusion that can better explain the notion of rationality alongside the act of consumption. In the past, packaging was viewed just as a box, container or outer covering, but packaging now has many roles to play. The results confirmed that media exposure through TV, newspapers, magazines and the internet is statistically significant for consumer perception of FMCG packaging, with the exception of radio. Packaging elements have a positive effect on consumer purchase perception, but the background of the packaging does not influence consumer buying behaviour. Ultimately, the researcher concluded that marketers should adopt more effective product packaging elements and strategies in the market for attracting and retaining new and existing customers, in order to increase business and product market share.
Alpha-Theta Effects Associated with Ageing during the Stroop Test

The Stroop effect is considered a standard attentional measure for studying conflict resolution in humans. The response of the brain to conflict is thought to change over time, and it is impaired in certain pathological conditions. Neuropsychological Stroop test measures have been complemented with electroencephalography (EEG) techniques to evaluate the mechanisms in the brain that underlie conflict resolution from the age of 20 to 70. To study the changes in EEG activity during life, we recruited a large sample of 90 healthy individuals of different ages, divided by age into decade intervals, who performed the Stroop test while a 14-channel EEG was recorded. The results highlighted an interaction between age and stimulus that was focused on the prefrontal (Alpha and Theta bands) and occipital (Alpha band) areas. We concluded that behavioural Stroop interference is directly influenced by opposing Alpha and Theta activity and evolves across the decades of life.

Introduction

The classical Stroop test [1] is an executive task used to evaluate prefrontal function that can be applied during the life span of healthy individuals and in neurological pathologies such as Parkinson's disease [2], Alzheimer's disease [3] and schizophrenia [4]. The test involves the presentation of a series of words (colour names) written in different coloured inks. The ink colour (chromatic information) and the colour's name (semantic information) may be the same (congruent target) or different (incongruent target), demanding the resolution of a cognitive conflict. Accordingly, the subject must respond as a function of the ink colour and not the word meaning or the semantic information; overcoming the automatic response of reading the word produces a delay in the response known as the Stroop interference or Stroop effect.
Some studies have suggested that this interference may already occur at the stimulus-processing stage [5], a hypothesis that can be verified by measuring event-related potentials (ERPs) with electroencephalography (EEG), comparing the intensity and signal delays between congruent and incongruent targets [6]. EEG recordings showed that incongruent stimuli have no effect on the amplitude or latency of the P300 component (the cognitive evoked potential) [7], although they induce stronger negativity at around 400 ms than neutral stimuli [8]. This would suggest that interference analysis occurs quite late in time, closer to the response stage than to the stimulus-processing stage. The specific nature of the Stroop effect can also be studied by instantaneous coherence analysis based on a Fast Fourier Transform (FFT) and in relation to frequency-band studies. It was proposed that the 13-20 Hz frequency band is sensitive to discrimination between congruent and incongruent items, and that higher coherence was observed within the left frontal and left parietal areas [9][10][11]. This is consistent with more recent findings of coherence within a time interval of 100-400 ms at 13-18 Hz, which was higher for incongruent than for congruent situations in frontal, central and parietal regions, without a hemisphere effect. Regarding other bands, increased activity in the 8-10 Hz frequency band was observed within the prefrontal and parietal areas during the Stroop task, and an interaction was assumed between prefrontal and parietal areas [12]. The location of this effect has also been studied using functional neuroimaging, and the results linked selective attention to activity within the dorsolateral prefrontal cortex (DLPFC) and the anterior cingulate cortex (ACC). However, the relative contribution of the specific regions involved in the Stroop task remains a continuing source of debate [13].
A number of studies have led to the hypotheses that the left DLPFC may be involved in representing and maintaining the attentional demands of this task, while response-related activity is associated with the ACC [14,15]. Regarding the age effect, there is evidence of age-related increases in interference costs. Indeed, ERP studies showed that the peak latency of the P3 wave was delayed in incongruent trials with respect to congruent ones and that this increase was greater for older rather than younger adults. Comparative studies between young and old populations suggested that age differences in the Stroop interference effect can be explained by a general functional slowing in the older population, which increases Stroop interference [16,17]. The purpose of this study was to describe the EEG components involved in Stroop interference, the type of band changes and where they occur on the scalp during the lifetime of individuals. We hypothesized that some bands would remain strong and stable throughout life, while others would show significant changes in older rather than younger subjects. The description of this process should improve our understanding of pathological conditions related to ageing in which attention is severely impaired, such as Parkinson's disease.

Subjects

Ninety healthy volunteers took part in this study, divided into five groups according to age decade (n20-29 = 17, n30-39 = 20, n40-49 = 17, n50-59 = 18, and n60-69 = 18). All volunteers were right-handed, as determined by the Edinburgh Handedness Inventory [18], and they had no clinical history of neurological disease. The University of Murcia ethics committee approved this study. All participants were informed about the aims of the study and the conditions of confidentiality. They also signed a consent document, in accordance with the Ethics Committee of the University of Murcia (Spain), where the EEG tests were carried out.
Paradigm

During the EEG tests, subjects were asked to resolve a modified version of the Stroop test [19], as used in previous studies [2], which involved two kinds of stimuli: 1) incongruent targets, colour names printed in incongruently coloured ink (i.e. Rojo [red in English] written in green ink, Verde [green in English] written in blue ink, Azul [blue in English] written in red ink); and 2) congruent targets, animal names always printed in the same colour (e.g. Alce [moose in English] written in blue ink; Rana [frog in English] written in red ink; Visón [mink in English] written in green ink).

Experimental situation

The experiment was carried out in an electrically shielded, sound-attenuating room. Participants were instructed to answer as soon as possible and to avoid body movement during the recording. Each subject sat on a sofa in the individual sessions, and the stimuli were presented on a plasma TV screen (Samsung LE-32A457, 32 inch, widescreen, LCD, HD Ready) connected to the main computer and situated 60 cm in front of the sofa. The subject held the experimental keypad (LUMINA PAD from Cedrus, model LU430-3B) in his/her right hand, and the presentation of the stimuli was carried out using the Transdatix S.L. software, which also allows the responses to be recorded (reaction time and correct/incorrect/missing answers). Subjects used a 3-key keypad: one red, one blue and one green key. The stimuli were presented alternately as 9 trains of 10 congruent stimuli and 9 trains of 10 incongruent stimuli. Each stimulus lasted 3,000 ms, during which time the subject had to reply by pressing the right key. No feedback was provided.
EEG recording

EEGs were recorded continuously using a BrainAmp standard EEG amplifier (256 Hz sampling rate; 0.1-39.9 Hz analogue band pass; resolution 0.5 µV: Brainproducts, Munich, Germany) and a BrainCap with 14 electrodes (Fp1, Fp2, Fz, C1, C2, C3, C4, Cz, T3, T4, Pz, O1 and O2) relative to a specific reference electrode within the cap between AFz and Fz. The ground electrode was situated between Fz and Cz. A vertical electrooculogram (VEOG) was recorded from electrodes attached above and below the left eye, and the horizontal electrooculogram (HEOG) was obtained from the outer canthi of both eyes (Lansbergen & Kenemans, 2008). The electrode impedance was kept below 5 kΩ, and the EEG and EOG signals were band-pass filtered online (DC-50 Hz, 50 Hz notch filter).

Data Analysis

Power spectra were computed across the inter-trial interval. EEG time series were divided into non-overlapping 3,000-ms-long windows, beginning at 0 ms post-response. Power spectra were obtained for each window using the Fast Fourier Transform (FFT) with a cosine windowing method. Spectra for each window were averaged separately for congruent and incongruent trials. Statistical analyses were carried out using log-transformed mean power values in each frequency band (from 1 to 32 Hz) for the positions of all the electrodes. The data from six subjects were rejected for technical reasons. Repeated-measures ANOVA (rm-ANOVA) included all 14 electrodes as a within-subjects factor, and group, band and stimulus (congruent, incongruent and resting) as between-subject factors, followed by Bonferroni post hoc analysis.
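As a rough illustration of the power-spectrum step described above (not the authors' actual code), the sketch below applies a cosine (Hann) window and an FFT to non-overlapping 3,000-ms windows of a simulated 256-Hz single-channel signal and averages power within example frequency bands; the random signal and the exact band limits are assumptions made for the example.

```python
# Illustrative sketch: 3,000-ms non-overlapping windows of a 256-Hz signal,
# Hann (cosine) windowing, FFT power spectra, and mean power per band.
import numpy as np

fs = 256                    # sampling rate (Hz), as in the recording setup
win_len = 3 * fs            # 3,000-ms window -> 768 samples

rng = np.random.default_rng(0)
eeg = rng.standard_normal(10 * win_len)   # fake single-channel EEG trace

# Split the trace into non-overlapping 3-s windows.
windows = eeg.reshape(-1, win_len)

# A Hann taper approximates the "cosine windowing" mentioned in the text.
taper = np.hanning(win_len)
spectra = np.abs(np.fft.rfft(windows * taper, axis=1)) ** 2
freqs = np.fft.rfftfreq(win_len, d=1 / fs)

def band_power(spec, freqs, lo, hi):
    """Mean power across windows within a frequency band [lo, hi)."""
    mask = (freqs >= lo) & (freqs < hi)
    return spec[:, mask].mean()

# Example Theta (4-8 Hz) and Alpha (8-13 Hz) band limits (assumed here).
theta = band_power(spectra, freqs, 4, 8)
alpha = band_power(spectra, freqs, 8, 13)
print(f"mean Theta power = {theta:.2f}, mean Alpha power = {alpha:.2f}")
```

In the study, such per-band mean power values (log-transformed) are what enter the rm-ANOVA, averaged separately for congruent and incongruent trials.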
Furthermore, separate one-way ANOVAs were used to assess performance in the Stroop test: 1) efficiency was measured as the ratio of correct responses (number of correct responses/total number of responses), including colour as a within-subject factor and group as the between-subject factor; 2) reaction time (RT) was evaluated including colour as a within-subject factor and group as the between-subject factor (see Figure 1).

Results

The rm-ANOVA showed no significant effect of the group factor (F < 1). There was a significant effect of the within-subject factor band [F(4,890) = 1164.463, p < .001] (Table 1), with the Beta rhythm reaching higher values than the rest of the bands (p < .001). Band results: the post hoc analysis indicated the significant effects described below.

Discussion

In this study we have analysed the changes in Stroop interference at different stages in the life of individuals, analysing the responses within specific bands at electrodes placed at different locations during Stroop task performance. Our results suggest that a complex combination of changes in the Alpha and Theta bands evolves from the 20's until the 70's, together with a progressive increase in the latency difference of the response between congruent and incongruent items. In general terms, alertness is characterized by reduced Alpha activity and increases in the rest of the frequencies [20]. In particular, an increase in Theta activity is related to information processing and contributes to cognitive functions such as memory encoding engagement [21,22], learning [23] and creative processing [24]. During conflict resolution, the reduction in Alpha activity corresponds to diffuse electrical inhibition over the scalp that is required to resolve any demanding cognitive task. This process is essential to guarantee a correct analysis of the information and correct processing, mainly within parietal areas.
In our study, conflict solving produced a reduction of Alpha waves in the right occipital lobe (O2), particularly in the older groups (those in their 60's), and an increase in the Theta frequency at Fp2. The location O2 corresponds to the parietal homotypical isocortex of the parieto-occipital region, which is involved in receiving and processing visual information. Simultaneously, Theta activity increased within the prefrontal lobes (Fp1 and Fp2), frontal lobes (F3 and C3), frontal sagittal line (Fz) and central sagittal line (Cz), being highest at Fp2. The right frontopolar cortex (Fp2) is essential for the processing of information received from the associative cortex, and is in continuous exchange with memory areas [25]. Petrides [26] described the involvement of prefrontal areas during the Stroop test: the anterior fronto-basal region (area 11) is involved in novelty flagging (which requires memory connections [27]), and the posterior fronto-basal region (area 13) contributes to the meaning analysis of the stimulus; in case of incongruence, signal analysis would require the activation of areas 11 and 13, which are connected to area 25 (subgenual), the amygdala and the entorhinal and perirhinal cortex for meaning elaboration [28]. The peak of Theta activity under Fp2 resembles the confluence of the lateral prefrontal and anterior fronto-basal electrical fields, which respond to data comparison and memory processing. A Theta activity increase was also found under electrode Fz, which corresponds to the confluence of the electrical fields of area 13 (posterior fronto-basal cortex) and areas 32a and 24 (anterior pregenual cortex, limbic system). Cz electrode activity corresponds to areas 32b and 24b (anterior cingulate cortex), as described for both congruent and incongruent items using fMRI and PET techniques [2,29,30]. Further areas, such as the left premotor area (F3) and the left motor area (C3), would account for the motor response, performed by pressing a button with the right hand.
Such a complex and long process, particularly for incongruent analysis, gives rise to longer reaction times and more errors, leading to the so-called "Stroop effect". These results are in agreement with the relationships previously identified between central executive and working memory processes and fronto-parietal electrode coupling [31]. Moreover, the general functional scheme used here matches that applied in a previous study where similar relationships between central executive and working memory processes and fronto-parietal electrode coupling were described [32]. Regarding the specific location of the changes in the EEG signal, our results confirm that Stroop interference involves the right frontal cortex (lateral and basal prefrontal areas, Fp2) and posterior fronto-sagittal areas (Fz), as proposed from previous fMRI clinical studies in healthy controls and patients with schizophrenia [33]. Considering the whole pool of data by decades across the sample and irrespective of age, reaction time was significantly shorter for congruent than for incongruent items. Regarding the effect of age in relation to Stroop incongruence, our results indicated that older adults have longer reaction times for both congruent and incongruent items. Response time was significantly shorter for the younger participants than for the older participants on congruent and incongruent items (younger: 700 ms for congruent and 825 ms for incongruent; older: 725 ms for congruent and 1250 ms for incongruent). These data suggest an increase of 25 ms/decade for congruent and 85 ms/decade for incongruent items, probably due to the contribution of different ageing processes that may start from age 20. This is in agreement with previous studies: the Stroop test in different age groups reported decreases in reaction times to incongruent stimuli from 30 to 20 years of age (−0.5 z-scores), and these times start to increase from 40 years of age onwards, at a rate of 0.2 z-scores/decade [34].
Consistent with well-described anatomical changes, Stroop interference reaches adaptive levels relatively early in childhood (6-7 years), although interference control continues to develop into late adolescence [35]. In fact, 10-12 year-old subjects are still more susceptible to interference errors than adults [36]. Likewise, previous studies revealed age-related differences in Correct Response Negativity (CRN) amplitude: CRN amplitude was larger after incongruent than after congruent Stroop stimuli in young adults, whereas older adults showed greater CRN amplitude in both incompatible and compatible trials. Hence, there appears to be an age-related impairment in (post-)response conflict [37][38][39]. These effects are connected with other age-related anatomical changes in the brain, such as the increase in ventricle volume (10-15%) from the fourth to the eighth decade of life in the healthy population [40]. Standard ageing processes may start at age 20 [24], although some ageing parameters such as myelination increase through age 40 and in some cases until age 60, especially in the intracortical horizontal plexuses [41]. Within the central nervous system, standard ageing changes include decreased microcirculation [42,43], ventricular enlargement [44], white matter reduction [45][46][47] and encephalic weight loss [48]. Slowing of responses with age may well be attributed to standard ageing changes in the central nervous system. In summary, our results highlight the cortical areas involved in conflict resolution, implicating the right posterior parietal or occipito-parietal cortex, the fronto-basal cortex and the left ACC. The EEG frequency that best defines the engagement of these areas is represented by the appearance of 4-6 Hz Theta activity at Fp2 (some peaks of which may reach 9 Hz), and the simultaneous reduction of the Alpha and Beta rhythms. The brain's ability to swap between these EEG activity bands is crucial to achieve efficient performance [31].
Such a key combination seems not to be optimal in the 20's but rather in the 30's, and from then on it starts to decline until the 60's as healthy ageing occurs. We believe that our results enhance current knowledge of the EEG changes that take place under cognitively demanding conditions during the life span of an individual. However, further studies should consider increasing the number of electrodes [49] and applying basal interpolation software to identify the contribution of each EEG component. These data are being used in the EXOLEGS project, which aims to improve the autonomy of elderly and impaired people through the application of user-interface techniques, focusing mainly on the Brain-Computer Interface component.
Immunosuppressive Mesenchymal Stromal Cells Derived from Human-Induced Pluripotent Stem Cells Induce Human Regulatory T Cells In Vitro and In Vivo

Although mesenchymal stromal cells (MSCs) are considered a promising source of cells for modulating immune functions of cells from the innate and adaptive immune systems, their clinical use remains restricted (low cell numbers, limited in vitro expansion, absence of a full phenotypic characterization, few insights into their in vivo fate). Standardized MSCs derived in vitro from human-induced pluripotent stem (huiPS) cells, which remedy part of these issues, are likewise considered a valuable tool for therapeutic approaches, but their functions remain to be fully characterized. We generated multipotent MSCs derived from huiPS cells (huiPS-MSCs) and, focusing on their immunosuppressive activity, showed that human T-cell activation in coculture with huiPS-MSCs was significantly reduced. We also observed the generation of functional CD4+ FoxP3+ regulatory T (Treg) cells. Further tested in vivo in a model of human T-cell expansion in immune-deficient NSG mice, the immunosuppressive activity of huiPS-MSCs prevented the circulation and accumulation of activated human T cells. Intracytoplasmic labeling of cytokines produced by the recovered T cells showed reduced percentages of human-differentiated T cells producing Th1 inflammatory cytokines. By contrast, T cells producing IL-10 and FoxP3+ Treg cells, absent in non-treated animals, were detected in huiPS-MSC-treated mice. For the first time, these results highlight the immunosuppressive activity of huiPS-MSCs on human T-cell stimulation, with the concomitant generation of human Treg cells in vivo. They may favor the development of new tools and strategies based on the use of huiPS cells and their derivatives for the induction of immune tolerance.
Keywords: induced pluripotent stem cells, mesenchymal stromal cells, human T-cell immunosuppression, regulatory T cells, humanized NSG mouse, tolerance

Introduction

Among the different cells potentially used in regenerative medicine, mesenchymal stromal cells (MSCs) are viewed as an interesting source of cells, increasingly used in various clinical contexts as well as for immunomodulation in conditions linked to auto-/allo-immunity (1). These cells are self-renewing, adhere to plastic, express characteristic surface antigens, and have mesodermal multilineage differentiation potential in vitro (2,3). MSCs can be obtained from several tissues such as adult bone marrow (BM), adipose tissue, and several fetal organs. Ex vivo isolated somatic MSCs have been implicated in immune-regulatory functions on cells from both the innate and adaptive immune systems. Several secreted factors such as indoleamine 2,3-dioxygenase (IDO), transforming growth factor beta (TGF-β), hepatocyte growth factor, and prostaglandin E2 have been shown to mediate their capacity to inhibit T-cell activation [for review, see Ref. (1,4)]. However, cell-to-cell contact was also shown to be involved in the T-cell-inhibitory effect of MSCs, for instance through targeting cell surface ligands of the B7 superfamily (5,6). Nevertheless, a major restriction for their clinical use is the limited in vitro expansion of the low quantity of cells that can be collected from adult tissues. Furthermore, their full phenotypic identity in vivo remains to be established. Therefore, MSCs derived in vitro from human-induced pluripotent stem (huiPS) cells could fulfill some of the specifications required to improve MSC use in therapeutic approaches: a well-defined and unlimited number of cells with reproducible functional characteristics. 
Several publications reported the generation of pluripotent cell-derived MSCs through embryoid body formation, direct differentiation, or the addition of mesenchymal inductors (20)(21)(22)(23). These pluripotent cell-derived MSCs express the classical BM-MSC markers CD44, CD73, CD90, and CD105, are capable of in vitro differentiation into osteoblasts, adipocytes, and chondrocytes, and display some tissue repair activity in vivo in mouse models (24). Furthermore, they present an immunosuppressive activity in vitro against T cells (25) as well as NK cells (26). The in vivo immunosuppressive activity of such cells has so far been tested on murine immune cells in different models of immunological disorders such as allergic airways (27), experimental autoimmune encephalomyelitis (25,28), induced colitis (25), and ischemia (24). Here, we generated huiPS-MSCs (characterized by the expression of classical markers and their multipotent property) that display an efficient in vitro immunosuppressive activity on allogeneic T-cell responses through the induction of regulatory T (Treg) cell differentiation. We further demonstrate that their infusion into humanized NSG mice [human peripheral blood mononuclear cell (PBMC) mouse model] induced a decrease in the proportion of human CD4+ and CD8+ T cells expanding within the mice, along with a switch from a Th1 cytokine profile toward a Treg signature. Our data highlight the promising therapeutic potential of huiPS-MSCs in immune-mediated diseases.

Materials and Methods

Cell Culture

All culture products were provided by ThermoFisher (France) unless mentioned otherwise. In this study, the induced pluripotent stem (huiPS) cells were provided by Dr. I. Petit (INSERM U976, Paris), obtained from the reprogramming of human adult fibroblasts (29), or were produced in the laboratory (30). These cells were grown into homogeneous colonies on feeder mouse embryonic fibroblasts (MEFs) treated with mitomycin C (Sigma, France). 
The culture medium for huiPS cells consisted of 85% DMEM/F12, 15% knockout serum replacement, L-glutamine 100 mM, β-mercaptoethanol 0.1 mM, and bFGF 10 ng/ml (Invitrogen or Peprotech, France). The huiPS cells were passaged once or twice per week by splitting colonies in dissociation buffer (DMEM containing collagenase type IV 2 mg/ml) without detaching the feeder MEFs. Human iPS-derived mesenchymal stromal cells (huiPS-MSCs) were obtained by spontaneous differentiation of huiPS cells. For this, huiPS cells were maintained in huiPS medium without bFGF until the huiPS colonies overgrew. Without passaging, the differentiating cells were maintained for the next 4-6 days in an "MSC" culture medium containing 30% DMEM, 30% F12, 10% serum FcII (Hyclone, ThermoFisher, France), NEAA 10 mM, Na pyruvate 1 mM, penicillin (1 U/ml)/streptomycin (1 µg/ml), glutamine 1 mM, β-mercaptoethanol 100 µM, ascorbic acid 50 µg/ml (Sigma-Aldrich, France), and huEGF 10 ng/ml. The differentiating cells were then dissociated in PBS 0.05% trypsin-EDTA and put back in culture in the "MSC" medium. Only a few of the collected cells (<10%) were able to survive and grow (the medium was changed 1 or 2 days later to remove dead and floating cells). The "MSC" medium was then changed every 3-4 days. Ten to fifteen days later, the cells were passaged (passage 1) and analyzed for the expression of MSC markers (usually 80-90% of cells with an MSC phenotype were recovered). These huiPS-MSCs were then maintained in culture and passaged once or twice per week in "MSC" medium. Although the huiPS-MSCs obtained could be maintained up to passage 10, they were used before they reached passage 5. Human PBMCs were obtained from the EFS (Etablissement Français du Sang) from healthy platelet donors. After separation on a Ficoll gradient, the cells were immediately used or frozen and stored in liquid nitrogen. 
In Vitro Differentiation of huiPS-MSCs

Differentiation of huiPS-MSCs into adipocytes and chondrocytes was performed using specific differentiation media (StemPro Adipocyte medium and StemPro Chondrogenesis medium, ThermoFisher) according to the manufacturer's instructions. Osteoblast differentiation was performed in αMEM medium containing 5% Hyclone serum supplemented with ascorbic acid 50 µg/ml, β-glycerophosphate 110 µM, and dexamethasone 0.1 µM. After 14-21 days of culture and fixation, the cells were treated with either Alcian Blue 1% for coloration of the chondrocyte matrix, Alizarin Red 2% for the osteoblast-derived matrix, or Oil Red O for the presence of lipid droplets as a marker of adipocytes.

Mixed Lymphocyte Reaction (MLR)

Responder PBMCs labeled with 0.4 µM CFSE (ThermoFisher) were stimulated in an allogeneic manner in MLR with irradiated (40 Gy) PBMCs from two different donors labeled with 0.4 µM APC-Cell Tracer (eBioscience) and cocultured with or without irradiated (60 Gy) huiPS-MSCs (ratio of 1 huiPS-MSC per 10 immune cells) in u-bottom 96-well plates for 5 days. The cells were then collected, labeled with anti-CD4 PE-Cy7 and anti-CD8 PerCP antibodies, and analyzed by flow cytometry. We excluded stimulator cells stained with the APC-Cell Tracer. The proliferation of CD4+ and CD8+ responder T cells was measured by the dilution of the CFSE marker. In some experiments, the PBMCs were placed in the insert of a Transwell culture system (Nunc, 0.45 µm pores), with the huiPS-MSCs in the lower part. A rat anti-human IL-10-blocking antibody (clone JES3-19F1, Becton-Dickinson) and a rat isotype control were used (10 µg/ml) to inhibit the potential role of IL-10 in the immunosuppression.

The In Vivo Model of Human T-Cell Expansion in NSG Mice

NOD/SCID/IL2Rγ KO (NSG) mice were purchased from Charles River Laboratories. Animals were maintained in accordance with the general guidelines of the institute. 
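The CFSE-dilution proliferation readout described in the MLR section above amounts to a simple gating computation: cells that divided have diluted their CFSE, so their fluorescence falls below the undivided-parent gate. A minimal sketch follows; the intensity values and gate threshold are hypothetical illustrations, not data from this study:

```python
def percent_proliferating(cfse_intensities, undivided_gate):
    """Percentage of responder cells whose CFSE signal fell below the
    undivided-parent gate, i.e. cells that have divided at least once."""
    divided = sum(1 for x in cfse_intensities if x < undivided_gate)
    return 100.0 * divided / len(cfse_intensities)

# Hypothetical CD4+ responder CFSE intensities (arbitrary units):
intensities = [900, 850, 410, 200, 95, 880, 330, 150]
print(percent_proliferating(intensities, undivided_gate=500))  # 5 of 8 cells diluted → 62.5
```

In practice the gate is set from an unstimulated control sample, and dedicated flow cytometry software additionally fits discrete division peaks rather than a single threshold.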
They were injected ip with 10 × 10⁶ human PBMCs and were treated or not with 1 × 10⁶ huiPS-MSCs by ip injection once a week for 3 weeks. In some experiments, mice received only the huiPS-MSCs. Mice were sacrificed between weeks 5 and 7 after PBMC injection. Peritoneal fluid, blood, and spleen were collected for flow cytometry analysis. Approval for the use of mice in this study was obtained from our local Institutional Ethics Committee for Laboratory Animals (CIEPAL-Azur, NCE/2013-102).

ELISA

We determined by ELISA the concentrations of human IL-1α, IL-6, IL-8, IL-2, and IFNγ (Development kit, Peprotech) in MLR culture media according to the manufacturer's instructions.

RNA Extraction and Quantitative Real-Time PCR

Total RNA was extracted and reverse transcribed (SuperScript II Reverse Transcriptase, Invitrogen), and real-time RT-PCR was performed on a StepOnePlus Fast real-time PCR system (SYBR Green, Applied Biosystems) in triplicate as described. Results were normalized to several housekeeping genes (ACTIN, GAPDH, and UBIQUITIN) on the same plate. Differences in gene expression were calculated using the 2^(−ΔΔCt) method.

Statistical Analysis

Results are presented as mean ± SD. The level of statistical significance was determined by the unpaired two-sample Student's t-test. p values <0.05 were considered statistically significant.

Results

Characterization of huiPS-MSCs: Phenotype and In Vitro Multipotency

Because previous protocols for differentiating MSCs from pluripotent cells involve many steps, ranging from embryoid body formation to cell sorting or both (20, 22, 32-36), we set up a simple two-step protocol to rapidly generate mesodermal-derived cells (Figure 1A). Cultured human iPS cells were kept in the absence of bFGF for 7 days, followed by a change to an ectodermal/mesodermal medium (37) for the next 2 weeks. 
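The 2^(−ΔΔCt) normalization used in the qPCR analysis above can be written out explicitly: the target gene's Ct is first normalized to the housekeeping genes on the same plate (ΔCt), then referenced to the control condition (ΔΔCt). The sketch below uses the mean Ct of the housekeeping genes as the reference; the gene and Ct values are hypothetical, not measurements from this study:

```python
from statistics import mean

def fold_change_ddct(ct_target, ct_refs, ct_target_ctrl, ct_refs_ctrl):
    """Relative expression by the 2^(-ΔΔCt) method, normalizing each target
    Ct to the mean Ct of the housekeeping genes measured on the same plate."""
    dct = ct_target - mean(ct_refs)            # ΔCt, treated condition
    dct_ctrl = ct_target_ctrl - mean(ct_refs_ctrl)  # ΔCt, control condition
    return 2 ** -(dct - dct_ctrl)              # 2^(-ΔΔCt)

# Hypothetical Ct values: one target gene vs. three housekeeping genes.
fc = fold_change_ddct(24.0, [18.0, 18.5, 17.5], 26.0, [18.0, 18.5, 17.5])
print(fc)  # ΔΔCt = -2 → fold change 4.0
```

A lower Ct means earlier amplification, hence more template; a negative ΔΔCt therefore corresponds to upregulation relative to the control.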
Eighty to ninety percent (at passage 1) and 100% (at passage 2) of the recovered cells (Figure 1B) expressed surface antigens (Ags) known to be expressed by tissue-derived MSCs. They were positive for CD44, CD73, CD90, CD105, and HLA-ABC Ags but negative for the endothelial marker CD31, the hematopoietic and immune-related markers CD34 and CD45, HLA-DR antigens, and the CD80 and CD86 co-stimulatory molecules (Figure 1B; Figure S1 in Supplementary Material). Three different human iPS cell lines, prepared from different donors and from different tissues [skin fibroblasts (29,38) or myoblasts (30)], behaved identically, confirming that this protocol is applicable to multiple human iPS cell lines. In addition, the recovered cells, kept in further culture, were capable of differentiating into the classical mesenchymal-derived cells (osteoblasts, chondrocytes, and adipocytes) (Figure 1C) when cultured with the appropriate differentiation media, suggesting that the huiPS-MSC population contained multipotent cells and corresponded to bona fide MSCs. Finally, the huiPS-MSCs we generated secreted high and sustained amounts of the IL-6 and IL-8 cytokine/chemokine but low amounts of IL-1α (as tested by ELISA) (Figure 1D), a cytokine profile shared by tissue-derived MSCs and associated with their role in tissue repair (39,40). Altogether, these results highlight that the huiPS-MSCs generated with our protocol share strong similarities with in vitro-maintained MSCs derived from adult tissues (2,3).

In Vitro Immunosuppressive Activity of huiPS-MSCs on Activated T Lymphocytes

Besides their multipotent characteristics, tissue MSCs display immunosuppressive activities in vitro. To test the immunosuppressive properties of the huiPS-MSCs, we analyzed their action on the proliferation of human T lymphocytes stimulated in an allogeneic manner (Figure 2A). 
The stimulation of PBMCs in MLR with allogeneic antigen-presenting cells (PBMCs from a secondary donor, here named alloAPC) resulted in CD4+ and CD8+ T-cell proliferation, which was significantly reduced in coculture with huiPS-MSCs. We further observed a significant reduction in the percentage of CD25-expressing CD4+ and CD8+ T cells, indicating a diminished proportion of activated T cells in coculture with huiPS-MSCs (Figure 2B). Another activation marker (CD69) was, on the contrary, expressed on a higher percentage of CD4+ and CD8+ T cells (Figure 2B). Since CD69 is known as an early activation marker, its maintained expression might indicate that effective immunosuppression requires early activation of the T cells. CD69 is also expressed on memory T cells, which would suggest that the T cells remaining in the coculture might acquire such a "memory" phenotype. We also analyzed by RT-qPCR the relative expression of mRNAs coding for cytokines as well as some surface receptors involved in such immune reactivities (Figure S2 in Supplementary Material). As expected, the gene expression signature of activated T cells was clearly reversed upon exposure to huiPS-MSCs. Indeed, compared with the RNA expression by huiPS-MSCs alone or in coculture with unactivated T cells, the relative expression of activated T-cell cytokines (IL-2, IFNγ, TNFα, and TNFβ) was reduced in the MLRs in the presence of huiPS-MSCs (Figure S2A in Supplementary Material). Measured by ELISA (Figure 2C), IL-2 and IFNγ production was indeed reduced in the cocultures with huiPS-MSCs, confirming at the protein level the lower activation of T lymphocytes observed under these conditions. 
Furthermore, the RNA expression of other cytokine-coding genes was affected (Figure S2A in Supplementary Material): we observed an increased expression of the inflammatory cytokines IL-1α, IL-1β, and IL-6 (cytokines shown to sustain the immunosuppressive activity of MSCs), as well as of TGF-β and LIF, known for their immunosuppressive functions. The mRNA coding for the cytokine IL-10 was also overexpressed. However, its immunosuppressive function appeared not to be involved in the T-cell immunosuppression. Indeed, using anti-IL-10-blocking antibodies during the initial MLR in the presence of huiPS-MSCs did not affect the inhibition of T-cell proliferation (Figure S3A in Supplementary Material). Tested in a Transwell assay, the level of inhibition of CD4+ T-cell proliferation was partially reduced (Figure S3B in Supplementary Material).

huiPS-MSCs Induced a Switch in T-Cell Polarization

Using intracytoplasmic cytokine detection by flow cytometry in activated T lymphocytes, we observed a dramatic reduction in the proportion of CD4+ T lymphocytes producing IFNγ and TNFα, corresponding to Th1 cells, in MLR with huiPS-MSCs (Figure 3A). Furthermore, we detected the presence of CD4+ Treg cells among the T-cell populations recovered from the cocultures. As shown in Figure 3B, the MLR without huiPS-MSCs generated only a small number of FoxP3+ CD4+ T cells, while in the presence of huiPS-MSCs this percentage strongly increased, up to 16%. Since FoxP3 expression can also reflect human T-cell activation, we confirmed the presence of CD4+ Treg cells by the increased detection of a population of CD4+ T cells expressing a high level of CD25 and a low level of the CD127 marker (Figure 3C). This CD4+ CD25hi CD127lo T-cell population further contained a higher proportion of FoxP3+ cells (Figure 3C). 
To better define the phenotypic characteristics of the Treg cell population induced in the presence of huiPS-MSCs, we showed that the neuropilin-1 (Nrp1) surface marker was not particularly expressed by these Treg cells and that the Ikaros family member Helios was clearly highly expressed in the whole CD4+ T-cell population (reflecting a global activation) (Figure S4 in Supplementary Material). Finally, the CD4+ T-cell population recovered from the cultures in the presence of huiPS-MSCs was assayed in a secondary MLR. As shown in Figure 3D, this CD4+ T-cell population containing Treg cells was very efficient at inhibiting CD4+ T-cell proliferation. Indeed, we observed 50 and 70% inhibition of cell proliferation when the ratios of Treg cells to CD4+ T cells were 1:3 and 1:2, respectively. We thus demonstrated that the FoxP3+ CD4+ T cells generated during the coculture with huiPS-MSCs are immunosuppressive CD4+ Treg cells. Altogether, these results highlight the in vitro immunosuppressive activity of huiPS-MSCs on T-cell stimulation, which induces a switch in T-cell cytokine polarization and the generation of Treg cells.

In Vivo Suppressive Activity of huiPS-MSCs

We further tested the immunosuppressive activity of huiPS-MSCs in a model of human T-cell expansion in immune-deficient NSG mice (41). First, we determined whether ip-injected huiPS-MSCs could be detected in different compartments of NSG mice. HLA-ABC+ huiPS-MSCs were labeled with CFSE and injected ip into NSG mice. Despite a lower level of CD73 expression on the recovered cells, we showed that CFSE+ HLA-ABC+ huiPS-MSCs could be detected up to 7 days after injection, not only within the peritoneal cavity but also among the circulating cells and splenocytes (Figures 4A,B). This indicated that the injected huiPS-MSCs remained viable for at least 7 days within these mice and were able to circulate at least up to the spleen. 
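The inhibition percentages quoted for the secondary MLR above follow the standard calculation for a Treg suppression assay: proliferation of responder T cells with the Treg-containing population is compared to proliferation without it. A minimal sketch, with illustrative proliferation percentages rather than measured values:

```python
def percent_suppression(prolif_with_treg, prolif_without_treg):
    """Inhibition of responder T-cell proliferation, relative to the
    Treg-free culture, expressed as a percentage."""
    return 100.0 * (1 - prolif_with_treg / prolif_without_treg)

# Illustrative numbers: 60% of responders proliferate alone,
# 18% when cocultured with the Treg-containing population.
suppression = percent_suppression(18.0, 60.0)  # ≈ 70% suppression
```

This normalization is what allows suppression to be compared across different Treg:responder ratios, as in Figure 3D.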
Figure 3 | Switch in T-cell effector function induced by human-induced pluripotent stem (huiPS)-mesenchymal stromal cells (MSCs). (A) Flow cytometry analysis of the intracytoplasmic production of Th1 cytokines (IFNγ and TNFα) performed on CD4+ T lymphocytes after allogeneic stimulation without (no) or with huiPS-MSC cocultures. The histogram represents the % of CD4+ T cells expressing IFNγ (mean % ± SD and p value calculated from three independent experiments). (B) Detection of CD4+ FoxP3+ regulatory T (Treg) cells after mixed lymphocyte reaction (MLR). The dot plots show the % of FoxP3-expressing CD4+ T cells determined by flow cytometry after specific intranuclear staining of FoxP3 performed on CD4+ T lymphocytes after allogeneic stimulation without (no) or with huiPS-MSC cocultures. The histogram represents the mean % ± SD with the corresponding p values calculated from three independent experiments. (C) The proportion of CD4+ T cells expressing a high level of CD25 (CD25hi) and no or a low level of CD127 (CD127lo) was determined by flow cytometry. Gated on this population, the level of FoxP3 expression was analyzed. The histogram represents the mean % ± SD with the corresponding p values calculated from two independent experiments. (D) Treg cells obtained in the MLR in the presence of huiPS-MSCs are immunosuppressive in vitro. Mixed lymphocyte reactions (at different cell ratios) were performed between activated CFSE-labeled CD4+ T cells and the CD4+ Treg cell-containing population obtained from a previous coculture of T cells with huiPS-MSCs. The left panel shows a representative graph indicating the level of CFSE dilution (i.e., proliferating CD4+ responding T cells) at three different ratios after 4 days in culture, analyzed by flow cytometry. The right panel displays the proportion of proliferating CD4+ T cells, expressed as a percentage of the proliferation of CD4+ T cells in the absence of the Treg cell-containing population. 
The histogram represents the mean % ± SD with the corresponding p values between ratio 0:1 and either ratio 1:3 or 1:2, calculated from three independent experiments.

In a second step, we tested the in vivo impact of huiPS-MSCs on the expansion of human T cells. NSG mice were injected with human PBMCs, treated or not with three infusions of huiPS-MSCs (by ip injection at 1-week intervals), and sacrificed between weeks 5 and 7. After sacrifice, cells collected from the peritoneal cavity, the blood, and the spleen were analyzed by FACS (not shown and Figure 5) on the basis of expression of the human CD45 Ag. Within the peritoneal cavity, human CD45+ cells represented about 65% of total cells recovered from control or huiPS-MSC-treated mice; more than 95% of them were CD3+ T cells (not shown). This indicated that among the injected PBMCs, T cells were the main human cell population able to expand and colonize the mice. Accordingly, we observed in the blood and spleen that more than 90% of human CD45+ cells were CD3+ T cells with the expected proportions of CD4+ and CD8+ cells, indicating a similar rate of expansion. But when mice were treated with huiPS-MSCs, the percentage of circulating human T cells was significantly reduced, about 1.8-fold (Figure 5A), leading to a 1.6-fold reduced accumulation of total T cells within the spleen (Figure 5B). Interestingly, the proportions of both CD4+ and CD8+ T cells were changed, the CD8+ T cells being significantly more affected by the huiPS-MSC treatments (Figure 5B). Intracytoplasmic labeling of cytokines produced by the human T cells recovered from the spleen was performed. We showed that untreated mice displayed high percentages of human inflammatory IFNγ+ TNFα+ Th1 cells, while few or none produced the anti-inflammatory cytokine IL-10 (Figure 6A). 
By contrast, in mice treated with huiPS-MSCs, the proportion of Th1 cells was substantially reduced, while that of T cells producing IL-10 was increased more than sixfold. In parallel, FoxP3+ CD3+ Treg cells were absent in non-treated animals, whereas they were systematically detected in huiPS-MSC-injected mice (Figure 6B). Altogether, these data demonstrated that the huiPS-MSCs were able to limit human T-cell expansion in vivo, along with a reduced Th1 inflammatory cytokine profile, the presence of IL-10-producing T cells, and the generation of FoxP3+ Treg cells.

Discussion

The clinical use of mesenchymal stromal cells, through many therapeutic protocols, has proven its safety and efficacy in the treatment of various degenerative diseases and tissue injuries. Since the pioneering work by Takahashi et al. on the derivation of induced pluripotent stem cells (42,43), tremendous progress allows us to envision future safe clinical applications of such cells in regenerative medicine. In this study, huiPS cells were used to efficiently generate mesenchymal stromal cells (huiPS-MSCs) with a simple spontaneous differentiation method. As in other studies evaluating the properties of MSCs derived from pluripotent cells (21), we confirmed that the cells we obtained exhibit the morphologic, phenotypic, and immunomodulatory characteristics attributed to adult tissue- or cord blood-derived MSCs (4). They fulfill the defined standards attributed to in vitro-expanded MSCs derived from BM, with a lack of expression of specific hematopoietic and endothelial cell markers, and, as expected, they express the MSC-identifying markers CD73, CD90, and CD105 (3). Like their ex vivo counterparts, the huiPS-MSC population we generated is able to give rise, under appropriate culture conditions, to osteoblasts, chondrocytes, and adipocytes, revealing a similar multipotent property. Furthermore, they produced IL-6 and IL-8, cytokines known to be associated with MSC tissue repair potential (39,40). 
Among the different approaches used to differentiate pluripotent cells into MSCs, either embryoid body formation or the use of flow cytometry cell sorting might complicate the large-scale production of such cells for industrial or clinical applications. The protocol we set up allows the generation of huiPS-MSCs in 2 weeks from confluent iPS cell cultures. This simple protocol, fast and inexpensive, proved efficient and reproducible. Nevertheless, the cellular and molecular mechanisms involved in the in vitro differentiation of MSCs from pluripotent stem cells, in our hands and in many of the published studies, remain to be characterized. Some insights were recently provided indicating that MSCs could be derived from hES cells via a trophoblast-like intermediate state (25) or in increased numbers through the inhibition of the IKK/NF-κB signaling pathway (23). Such studies provide new tools not only to generate MSCs efficiently in vitro but also for a better understanding of the in vivo origin of MSC populations. We also analyzed in vitro some of the mechanisms of immunosuppression exhibited by the huiPS-MSCs we generated. Many in vitro and in vivo studies have reported the potent immune-modulating functions of tissue-derived MSCs through action on different types of immune cells, activated by various means (1,4). Focusing on human T cells in our study, stimulation of PBMCs by MLR in coculture with huiPS-MSCs resulted in a dramatically decreased proliferation of both CD4+ and CD8+ T lymphocytes and a concurrent decrease in IL-2 and IFNγ production, both cytokines associated with inflammatory activation of T cells. The transcriptomic analysis we performed gave some insight into the potential mechanisms involved in the immunosuppression. 
While we observed a seemingly contradictory increased expression of IL-1α, IL-1β, and IL-6, considered "inflammatory" cytokines, these were shown to be necessary to sustain the immunosuppressive activity of MSCs (1) and could therefore participate in the overall immunosuppression. We also noticed a diminished expression of co-stimulatory molecules involved in T-cell activation and polarization into effector cells, such as OX40L (44) and CD47 (45). On the contrary, the expression of LAG3 (46,47) and CTLA4 (48,49), both described for their potent immunosuppressive functions on T cells, was increased in the MLR in the presence of huiPS-MSCs. Likewise, the higher RNA expression of IL-10, TGF-β, and LIF, well-known strong immunosuppressive cytokines (1,50), strengthens the immunosuppressive action on T cells by the huiPS-MSCs. Of note, even if the level of RNA expression of PD-L1 (B7H1, CD274), another well-known immunosuppressive molecule, appeared reduced upon exposure of activated PBMCs to huiPS-MSCs (compared with activated PBMCs), this pathway is clearly engaged (not reported here). Altogether, our data point to the array of mechanisms used by huiPS-MSCs, similarly to somatic tissue MSCs, to inhibit T-cell responses. Finally, among the molecular pathways able to impact effector T cells, the huiPS-MSCs we generated were able to induce in vitro the differentiation of functional CD4+ Treg cells expressing FoxP3 as well as high levels of CD25 and no CD127 (51), at the expense of IFNγ+ TNFα+ inflammatory T cells. Treg cells were shown to depend on TGF-β signaling for the maintenance of their immunosuppressive function (52). Such a mechanism may be involved in our setting, as the huiPS-MSCs produced a very high amount of TGF-β (not shown). It remains to be determined whether such a Treg cell population is indeed induced in vitro or simply expanded from natural Treg cells present within the PBMCs used (53). 
The expression of the neuropilin-1 or Helios markers could be used to discriminate between natural and induced Treg cells (54)(55)(56)(57). However, this remains controversial (57)(58)(59)(60). The population we generated did not express neuropilin-1, consistent with an induced phenotype. Regarding Helios, it was highly expressed on the Treg cells tested. But Helios has also been described as highly expressed on induced Treg cells and is a marker of T-cell activation and proliferation (61,62). Nevertheless, the presence of cytokines such as IL-2 (whose level is inhibited but not abrogated), TGF-β, and IL-6 in the MLR we performed with huiPS-MSCs suggests that the Treg cells obtained could be induced in vitro (63). Interestingly, because of these different characteristics, one might consider the clinical use of huiPS-MSCs in strategies aiming at inducing immune tolerance toward other human iPS cell-type derivatives. Indeed, the clinical use of autologous iPS-derived cells might be compromised by the overall genetic instability generated during the reprogramming of epigenetically defined somatic cells. Allogeneic, genetically stable iPS cell line banks could be the clinical alternative, provided that the challenge of immune rejection of transplanted allogeneic cells is resolved. As discussed recently by Liu et al. (64), some proposed strategies that could be considered involve known tolerogenic pathways through the forced expression of CTLA4-Ig and PD-L1 by iPS-derived cells. Because they are able to generate Treg cells and probably innately use the CTLA4/CD28 and PD-L1/PD1 axes on T cells to block their reactivity [as do tissue MSCs (1,4)], huiPS-MSCs could be included within the arsenal of tools used to promote immune tolerance, as an associated cell type proposed along with the therapeutic transplanted iPS-derived cells. 
To the best of our knowledge, this study is the first to evaluate the immune-regulatory properties of huiPS-MSCs on human T-cell responses in vivo through the potential generation of Treg cells. The humanized NSG mouse model makes it possible to evaluate the state of activation of human T cells recovered from different organs (peritoneal fluid, blood, and spleen) after treatment with huiPS-MSCs. Indeed, injection of human PBMCs into such immunocompromised mice led to the expansion, circulation, and accumulation (within the spleen) of activated human CD3+ T cells, other CD45+ cells (such as B lymphocytes, NK cells, or monocytes) being barely detected within the different tissues. Furthermore, the NSG mice were not irradiated before human cell injection [as done in many NSG models (65)(66)(67)] to avoid possible repair effects of huiPS-MSCs on irradiated tissues that could interfere with their immunosuppressive action on T cells. Finally, the huiPS-MSCs were injected ip instead of iv to prevent possible pulmonary embolization, known to lead to massive secretion into the blood of immune-modulating factors such as TSG6 (68). This ip mode of injection did not confine the huiPS-MSCs to the peritoneal cavity, since we were able to detect them over 7 days in the bloodstream as well as in the spleen. These results indicate that huiPS-MSCs were able to migrate to different areas, colonized as well by activated human T cells, where they might exert their immunosuppressive functions. Using this model, we confirmed in vivo the inhibitory action of huiPS-MSCs on the proliferation of T cells, since we observed a decreased expansion of the CD3+ T-cell population in huiPS-MSC-injected mice. Even if this impact appears to be more pronounced on the CD8+ than the CD4+ T-cell population, both T-cell populations were affected. 
Interestingly, in the case of allogeneic stem cell transplantation, MSC-treated patients presented a higher level of IL-2 in their serum and a higher Th2 cytokine (IL-4) profile at the expense of the Th1 cytokine (IFNγ) profile (18). Our results indicate that treatment with huiPS-MSCs induces a switch from a Th1 signature toward a regulatory signature (with increased IL-10 production) and the generation of FoxP3+ Treg cells. Nevertheless, not all clinical studies using MSC infusion in humans reported such clear shifts in T-cell responses, possibly due to different clinical settings. Our results highlight that the induction of Treg cells may be a substantial mechanism by which huiPS-MSCs, and probably adult MSCs, exert their function in vivo. Interestingly, Gregoire-Gauthier et al. (69) reported that cord blood-derived MSCs were able to delay clinical signs of acute GVHD in irradiated NSG mice through healing processes and immunomodulation not related to Treg cells. This supports the idea that the function of MSCs may also differ depending on the experimental setting, which might explain some of the contradictory results of clinical studies. Co-transplantation of MSCs during allogeneic hematopoietic stem cell transplantation has been explored to enhance engraftment and decrease the risk of graft-versus-host disease (GVHD). However, although several preclinical and clinical studies with MSCs have been conducted, the results have been mixed and the efficacy of MSCs in a transplantation setting is so far unclear (14)(15)(16)(17)(18)(19)70). For such clinical studies, the MSC source and characteristics should be clearly defined to induce reproducible effects on T-cell immunosuppression. Such standardization could be achieved with in vitro-generated MSCs, and our simple method could therefore be very useful. 
To the best of our knowledge, our results represent the first demonstration that immune-modulatory huiPS-MSCs act on human T lymphocytes in vivo through a switch from a Th1 inflammatory differentiation pathway to a Treg cell pathway. These findings may promote the development of new strategies, involving pluripotent stem cells and their derived cells, for the induction of specific immune tolerance. Ethics Statement This study was carried out in accordance with the recommendations of our local Institutional Ethic Committee for Laboratory Animals (CIEPAL-Azur, NCE/2013-102), France. The protocol was approved by the C3M animal core facility committee (INSERM U1065, Université de Nice, France). Author Contributions CR: collection and/or assembly of data, data analysis and interpretation, manuscript writing, and final approval of manuscript. GS, JP, and LI: collection and/or assembly of data, data analysis and interpretation, and final approval of manuscript. NB, GD, CV, and AB: collection and/or assembly of data and final approval of manuscript. NM and AW: data analysis and interpretation and final approval of manuscript. CB-W: data analysis and interpretation, financial support, and final approval of manuscript. MR: conception and design, financial support, collection and/or assembly of data, data analysis and interpretation, manuscript writing, and final approval of manuscript.
The Effect of Gamal Leaf (Gliricidia sepium (Jacq.) Kunth ex Walp)-based Liquid Organic Fertilizer on the Vegetative Growth of Lettuce (Lactuca sativa L.) The growth of lettuce depends on the interaction of the plant and environmental conditions. Improper crop maintenance may cause low yields in lettuce production. Application of liquid organic fertilizer can be part of a crop-maintenance strategy. This study aimed to determine the effect and concentration of Gamal leaf (Gliricidia sepium (Jacq.) Kunth ex Walp)-based liquid organic fertilizer (LOF) on the vegetative growth of green lettuce plants (Lactuca sativa L.). The research used a non-factorial Randomised Group Design (RGD), with treatments consisting of P0 (control), P1 (20% Gamal leaves-based LOF), P2 (40% Gamal leaves-based LOF), and P3 (60% Gamal leaves-based LOF). The results showed that the Gamal leaves-based LOF produced in this research contained lower levels of the macronutrients C, N, P and K than stated by SNI 7763:2018 (2-6%), but its application on lettuce as tested plants could still support growth. The P1 dose (20%) was the best at supporting lettuce growth in the form of increased plant height and leaf area index. The P2 dose (40%) showed the minimum decrease in lettuce total chlorophyll content. Application of the P2 dose (40%) to the lettuce growth medium supported the highest uptake of N, while application of the P3 dose (60%) showed the highest uptake of P. Introduction Lettuce (Lactuca sativa L.)
is a plant belonging to the Asteraceae family. This plant is a horticultural commodity with good prospects and commercial value. The increasing population of Indonesia and growing awareness of the population's nutritional needs have led to a demand for vegetables. Lettuce in Indonesia is planted from the lowlands to the highlands, taking into account the selection of varieties suitable for the place where they grow. This plant is a seasonal vegetable originating from West Asia and America (Afsari et al., 2020). Raw lettuce contains calcium, phosphorus, iron, vitamin A, vitamin B and vitamin C and is very beneficial for body health (Ahmed et al., 2020). Lettuce leaves are rich in antioxidants such as beta-carotene, folate and lutein, which are effective in protecting the body from cancer. Its natural fiber content can maintain the health of the digestive organs (Raras et al., 2018). According to data from the Central Bureau of Statistics Indonesia (2021), lettuce production in 2021 reached 55,710 tons/ha and decreased to 47,920 tons/ha in 2022. Several reasons can be pointed out as causes of this decline, such as a reduction in plantation area, poor plant varieties, insufficient nutrients in the soil, and a climate that is not suitable for plant growth. Biologically, organic fertilizer is the most important energy provider for the activity of soil microorganisms. Providing organic fertilizer stimulates the proliferation of microorganisms and increases nutrients for plants (Fauziah et al., 2022). One of the plants that can be used as raw material for organic fertilizer, especially liquid organic fertilizer (LOF), is Gamal (Gliricidia sepium (Jacq.) Kunth ex Walp).
Gamal is a plant belonging to the Leguminosae family whose leaves can be used as a raw material for LOF preparation. During its growth, Gamal has the capability to absorb and fix N from its surroundings (Suparman et al., 2022). Gamal plants contain various nutrients at levels high enough for plant growth. Gamal leaves contain 3.15% N, 0.22% P, 2.65% K, 1.35% Ca and 0.41% Mg, so its biomass is used to improve the physical and chemical properties of soil (Widya et al., 2021). According to Sumaryani et al. (2018), application of Gamal leaf-based LOF at a concentration of 40% increased the number of leaves and stem height of tomato plants. Meanwhile, according to Peni et al. (2021), a concentration of 30% Gamal leaves-based LOF supported the growth of mustard greens. The aim of this research was to determine the effect and concentration of Gamal leaves (Gliricidia sepium (Jacq.) Kunth ex Walp)-based LOF on the vegetative growth of green lettuce plants (Lactuca sativa L.). Place and Time of Experiment This research was conducted from December 2022 to February 2023 in Rumah Berastagi Village, Karo Regency, North Sumatra. Measurement of leaf chlorophyll levels, N and P nutrient uptake, and soil analysis were carried out in the laboratory of the Faculty of Agriculture, University of North Sumatra. The materials used in this research were lettuce seeds, liquid organic fertilizer (LOF) prepared from Gamal leaves, EM4 (Effective Microorganisms) as microbial inoculum, white sugar water and coconut water. All research materials were purchased from agricultural stores in Karo Regency, North Sumatra. The tools used in these experiments were a hoe, knife, meter, bucket, watering can, stationery, ruler, camera, pH meter, thermohygrometer and UV-Vis spectrophotometer.
Research Design This research used a non-factorial Randomized Group Design (RGD) experiment; the treatment was the administration of Gamal leaves-based LOF at 4 levels (Table 1). The components of this experiment are summarized as follows: 4 treatments, triplicates per treatment, 12 research plots, a plot size of 1 m x 2 m, plant spacing of 50 cm x 60 cm, 6 sample plants per plot, 72 plants in total, 30 cm spacing between research plots, and 30 cm spacing between replicates. Measured Parameters of Experiment Experimental data were collected starting 2 weeks after planting (WAP) at intervals of 2 weeks, until the last measurement at week 4. The measured parameters of lettuce growth consisted of plant height (cm), leaf area index, leaf chlorophyll levels, N and P uptake, and initial and final soil analysis. Plant height was measured with a meter from the base of the plant stem to the highest growing point of the plant. Plant height data were collected once a week for 4 weeks. Leaf area index (LAI) observations were carried out at 2 WAP and 4 WAP. LAI data were obtained from the ratio of total leaf area to total ground area. According to Susi (2018), the formula used was: LAI = total leaf area / total ground area. Leaf chlorophyll measurements were carried out at 2 WAP and 4 WAP. Measurements of N and P nutrient uptake were carried out at 1 WAP and 4 WAP. Initial and final soil analyses were carried out at the Agricultural Laboratory of the University of North Sumatra. Production of Gamal leaves (Gliricidia sepium (Jacq.)
Kunth ex Walp)-based Liquid Organic Fertilizer (LOF) Approximately 10 kg of young Gamal leaves were collected and washed. The leaves were finely sliced, put into a container and covered with 20 liters of water. Four liters of coconut water, 1 liter of microbial inoculum (Effective Microorganisms, EM4) and 1 kg of white sugar were added to the mixture and stirred well. The mixture was fermented for 14 days before being used as fertilizer for the tested plants. Data analysis The data were tested using univariate analysis of variance (ANOVA) with the help of the SPSS application; where the variance showed a significant effect, means were compared using Duncan's test. Appearance and Nutrient Contents of Gamal leaves-based Liquid Organic Fertilizer (LOF) The Gamal leaves-based LOF produced in this study had a brownish green colour and a pungent smell (Figure 1). The analysis of the macronutrient content of the Gamal leaves-based LOF is shown in Table 2. The levels of the macronutrients C, N, P and K had not yet met the requirements for standard LOF as stated by SNI 7763:2018. The low level of macronutrients detected in the Gamal leaves-based LOF is thought to be related to the length of fermentation time required by microorganisms to break down the organic matter in the fertilizer. This is in line with the statement of Utami & Syamsuddin (2021) that nutrient content that does not meet SNI 7763:2018 results from insufficient time for microorganisms to break down organic matter in compost. Lettuce Growth Parameters After Application of Gamal leaves-based Liquid Organic Fertilizer (LOF) The effect of Gamal leaves-based LOF on the growth of lettuce was measured through plant height, leaf area index, leaf chlorophyll level, and macronutrient (N and P) uptake. Direct measurements of lettuce growth after application of Gamal leaves-based LOF, consisting of plant height and leaf area index, are presented in Figures 2 and 3.
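The LAI ratio and the one-way ANOVA step described above can be sketched in pure Python. This is a minimal illustration only: the study itself used SPSS followed by Duncan's test, and the function names (`leaf_area_index`, `one_way_anova`) and the replicate height values below are hypothetical placeholders, not the study's measurements.

```python
def leaf_area_index(total_leaf_area_cm2, ground_area_cm2):
    """LAI as described above: total leaf area divided by ground area."""
    return total_leaf_area_cm2 / ground_area_cm2

def one_way_anova(groups):
    """Return (F statistic, df_between, df_within) for a list of samples."""
    k = len(groups)                              # number of treatments
    n = sum(len(g) for g in groups)              # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    df_between, df_within = k - 1, n - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

# Hypothetical plant heights (cm), three replicates per treatment (P0-P3)
heights = [
    [10.1, 10.4, 9.8],   # P0 (control)
    [12.5, 13.0, 12.8],  # P1 (20% LOF)
    [11.2, 11.6, 11.0],  # P2 (40% LOF)
    [11.0, 11.3, 10.9],  # P3 (60% LOF)
]
f, dfb, dfw = one_way_anova(heights)
print(f"F({dfb}, {dfw}) = {f:.2f}")

# e.g. LAI for 1800 cm^2 of leaf over a 50 cm x 60 cm planting space
print(leaf_area_index(1800, 50 * 60))
```

A significant F statistic (judged against the F distribution at the 5% level) would then be followed by a post-hoc multiple comparison such as Duncan's test, as done in the study.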
Application of Gamal leaves-based LOF in the final period of the research (week 4) showed that the P1 dose (20% Gamal leaves-based LOF) was the best dose to support lettuce growth in the form of increased plant height and leaf area index (Figures 2 and 3). The chlorophyll content of lettuce leaves and their uptake of the macronutrients nitrogen (N) and phosphorus (P) after application of Gamal leaves-based LOF are presented in Tables 3 and 4. The total chlorophyll concentration of lettuce leaves decreased at week 4 compared to week 2 for all tested plants treated with Gamal leaves-based LOF. Lettuce under the P2 treatment (40% Gamal leaves-based LOF) showed the minimum decrease in total chlorophyll level at week 4 compared to the other LOF treatments (Table 3). Uptake of N and P by lettuce at the end of the research period tended to occur in different ways. Uptake of N tended to decrease at week 4 compared to week 2, while uptake of P tended to remain stable from week 2 to week 4. This phenomenon was generally observed in all tested lettuce fertilized with Gamal leaves-based LOF. Among the Gamal leaves-based LOF treatments, application of the 40% dose (P2) to the lettuce growth medium showed the highest uptake of N at the end of the research period (week 4), while application of the 60% dose (P3) showed the highest uptake of P at the end of the research (week 4). Macronutrient Content of Initial and Final Soil The macronutrient content of the initial and final soil (soil condition at the end of the research period) used as lettuce growth media is presented in Table 5, along with the pH value of the soil. Levels of the macronutrients N and P, but not K, were lower in the final soil compared to the initial one. These results might be caused by the use of those macronutrients by the lettuce to fulfill their daily needs for growth.
The addition of Gamal leaves-based LOF did not substantially increase the levels of the macronutrients N, P, and K in the soil because their levels were already low in the LOF itself. The soil pH decreased in the final soil compared to its initial condition. Note: numbers in rows and columns followed by the same letter do not differ significantly according to Duncan's test at the 5% level. Discussion The Gamal leaves-based liquid organic fertilizer (LOF) produced in this research may contain lower levels of the macronutrients C, N, P and K, but its application on lettuce as tested plants could still support growth. After 4 weeks of planting, it was observed that bioparameters of lettuce, such as plant height and leaf area, had increased. Application of 20% Gamal leaves-based LOF (P1) was the best dose to support increased plant height (Figure 2). Plants use available macronutrients such as C, N, P and K from their growth medium as raw material for metabolism. The growth of lettuce in this research was influenced by the availability of macronutrients in its media. Increases in plant height, including in lettuce, are supported by the uptake of the nutrients P and K from the growth medium, which are necessary for carrying out the physiological and metabolic processes of the plant. Oviyanti et al. (2016) mentioned that the higher the concentration of LOF made from Gamal leaves, the better the plant condition, without interfering with plant growth and metabolic processes. Furthermore, studies by Triadiawarman (2019) on mustard and Milla (2023) on papaya stated that the use of Gamal leaves-based LOF increases plant height. Our results showed that the lowest dose of Gamal leaves-based LOF (20%) was already capable of supporting increased lettuce height. It is therefore assumed that a low dose of LOF is sufficient to support the vegetative growth of plants, including lettuce.
Application of 20% Gamal leaves-based LOF (P1) was also better at supporting increased lettuce leaf area compared to the other doses (Figure 3). The growth of lettuce leaf area in this research was calculated and stated in the form of the leaf area index (LAI). The size of the leaves greatly influences plant metabolism, especially the photosynthesis process. The macronutrients contained in soil and fertilizer play an important role in stimulating plant growth, including increases in leaf area. Liquid organic fertilizer (LOF) made from Gamal leaves is known to have a nitrogen content high enough to provide sufficient nutrients during plant growth, keeping the photosynthesis process active so that cell division, elongation and differentiation proceed well. The products of photosynthesis converted during respiration provide energy for cell division and enlargement, which causes leaves to grow longer and wider (Milla, 2023). According to Oviyanti et al. (2016), the presence of nitrogen in Gamal leaves-based LOF can accelerate the photosynthesis process and cause leaf organs to form faster. Plants that do not receive the N nutrition appropriate to their daily needs will be stunted in growth and produce smaller leaves, whereas plants that receive sufficient N will be taller and have wider leaves. Another study, by Qoniah (2019), mentioned that the use of LOF made from Gamal leaves had a real effect on lettuce leaves. The increase in leaf width and area is due to the capability of the meristem to proliferate and produce new cells. Leaf growth is influenced by hormones that regulate growth, water for the turgidity of leaf tissue cells, and the amounts of the nutrients N, P and K.
As observed in the strong increase in height of lettuce given the low dose of Gamal leaves-based LOF (P1), a similar effect of this low dose was noted on the increase in lettuce LAI. Both of these results may support the idea that application of a low dose of Gamal leaves-based LOF is sufficient to contribute to plant growth, including in lettuce. The total chlorophyll content of lettuce leaves measured in this research decreased at week 4 compared to week 2 for all tested plants treated with Gamal leaves-based LOF. Lettuce under the P2 treatment (40% Gamal leaves-based LOF) showed the minimum decrease in total chlorophyll content at week 4 compared to the other LOF treatments (Table 3). Application of organic fertilizer made from Gamal leaves had an effect on the amount of chlorophyll in lettuce leaves. This effect depends on one main factor, namely nitrogen (N). The macronutrient N is needed by plants, among other things for the formation of chlorophyll (Nisa et al.). Plants that lack N will show chlorosis on their leaves.
A study conducted by Efendi (2022) mentioned that application of LOF made from Gamal leaves had a significant effect on the increased amount of chlorophyll in mustard plant leaves. As presented in Table 3, our results showed that the total chlorophyll content of all lettuce treated with Gamal leaves-based LOF decreased at the end of the research period (week 4). Among those LOF treatments, the P2 dose (40% Gamal leaves-based LOF) showed the minimum decrease in total chlorophyll content at week 4. The decrease in the chlorophyll content of lettuce leaves in this research might be caused by aging of the plant. Shi (2019) mentioned that the decrease in chlorophyll content may also depend on the plant's stage of growth. After a peak of growth, chlorophyll content will gradually decrease and tend to flatten. The decrease in chlorophyll content may also relate to the plant's response to surrounding environmental factors that cause stress (Liang et al., 2017). During its life span, a plant provides material for its metabolism by absorbing nutrients from its surroundings. The plant's needs for macronutrients such as C, N, P and K were met by its initial growth medium or gained after enrichment with fertilizer. In this research, lettuce absorbed macronutrients from a medium already enriched by the application of Gamal leaves-based LOF. Uptake of N and P by lettuce at the end of the research period tended to occur in different ways. As observed, uptake of N tended to decrease at the end of the research period, while uptake of P tended to remain stable during the research period. These results may indicate that the lettuce's need for N is higher than its need for P. Nitrogen plays an important role in the growth of lettuce, especially in supporting the growth of its leaves.
Among all the Gamal leaves-based LOF treatments on lettuce, application of the 40% dose (P2) to the lettuce growth medium showed the highest uptake of N at the end of the research period (week 4), while application of the 60% dose (P3) showed the highest uptake of P at the end of the research (week 4). Based on our comparison of the initial and final soil macronutrient levels (Table 5), the concentrations of N and P, but not K, were lower in the final soil compared to the initial one. The decrease of N and P in the soil used as the lettuce growth medium could be caused by the use of those macronutrients by the lettuce to fulfill their daily needs for growth. According to Tando (2019), nitrogen in plants has an important role in encouraging plant growth by increasing the number of tillers, developing leaf area, and supporting protein synthesis. Plants that lack nitrogen in the soil will have leaves that turn yellowish, starting from the tip and then spreading to the middle of the leaf blade. Furthermore, as mentioned by Rianditya and Hartatik (2020), phosphorus (P) in plants plays a role in conserving and transferring energy in the form of ADP and ATP, and fulfilling the need for P will increase chlorophyll biosynthesis. Our Gamal leaves-based LOF was found to contain 0.19% organic carbon (C-organic), 0.10% nitrogen (N), 0.03% phosphorus (P), and 0.20% potassium (K). These macronutrient concentrations do not yet meet the standards for LOF as stated in SNI 7763:2018 (2-6%). Several efforts could be made to increase the quality of the LOF, such as the addition of organic materials rich in N, P and K, or refining the fermentation process to promote better decomposition of complex organic material into simpler components.
Conclusion The Gamal leaves-based liquid organic fertilizer (LOF) produced in this research contained lower levels of the macronutrients C, N, P and K than stated by SNI 7763:2018 (2-6%), but its application on lettuce as tested plants could still support growth. Application of the P1 dose (20%) of Gamal leaves-based LOF was the best at supporting lettuce growth in the form of increased plant height and leaf area index. The total chlorophyll content of all lettuce leaves treated with Gamal leaves-based LOF decreased at the end of the research period, but lettuce treated with the 40% dose (P2) showed the minimum decrease in total chlorophyll content. Uptake of N tended to decrease at the end of the research period, while uptake of P tended to remain stable during the research period. Among all Gamal leaves-based LOF treatments, application of the 40% dose (P2) to the lettuce growth medium showed the highest uptake of N at the end of the research period, while application of the 60% dose (P3) showed the highest uptake of P at the end of the research.
Figure 3. Leaf Area Index of lettuce treated with Gamal leaves-based LOF at weeks 2 and 4 after planting.
Table 1. Treatment and replicates used in the experiment.
Table 2. Quality analysis of Gamal leaves-based LOF.
Table 3. Leaf chlorophyll content of lettuce treated with Gamal leaves-based LOF at weeks 2 and 4 after planting.
Table 4. N and P nutrient uptake of lettuce treated with Gamal leaves-based LOF at weeks 2 and 4 after planting. Note: numbers in rows and columns followed by the same letter do not differ significantly according to Duncan's test at the 5% level.
Table 5. Macronutrient content of initial and final soil used as media.
Health simulation through the lens of self-determination theory — opportunities and pathways for discovery Health simulation is broadly viewed as an appealing, impactful, and innovative enhancement for the education and assessment of health professions students and practitioners. We have seen exponential and global growth in programmes implementing simulation techniques and technologies. Alongside this enthusiasm and growth, the theoretical underpinnings that might guide the efficacy of the field have not always been considered. Many of the principles that guide simulation design, development and practice have been intuited through practical trial and error. In considering how to retrofit theory to practice, we have at our disposal existing theories that may assist with building our practice, expertise, identity as a community of practice, authority and legitimacy as a field. Self-determination theory (SDT) is an established and evolving theory that examines the quality of motivation and human behaviours. It has been applied to a variety of contexts and provides evidence that may support and enhance the practice of health simulation. In this paper, SDT is outlined, and avenues for examining the fit of theory to practice are suggested. Promising links exist between SDT and health simulation. Opportunities and new pathways of discovery await. 
Introduction "Ok everyone, we're here to do a sim, this is a safe space, nothing that is said or done here will leave the room". "You've all done the work, you'll all be fine. Just go in there and I'm sure you'll be fabulous". "This is a safe place to make mistakes - no patients will be harmed". There is little doubt that people who have navigated themselves to this paper will have either said or heard these words in the context of health simulation. These words are quite comforting to say, and we genuinely want them to be true. They form part of a script that relies on adages to which we have become accustomed: simulation provides a psychologically safe space to rehearse skills, to make mistakes and to avoid patient harm. But just because these statements can be true does not mean that they always are true. This is by no means the first paper that has challenged some of the conventions, myths and practices that have been enthusiastically adopted in health simulation practice and research, and likely won't be the last. Further to critiquing the problems, or the debate about the problems, that exist in simulation practice, this paper seeks to explore the principles and practice of simulation through a different lens. The lens we will look through may generate deeper consideration of why some approaches to working with participants in simulations and simulation programmes work better than others (consider in situ vs non-in situ simulations, variation in the approach of debriefers, longitudinal debriefing, perceptions of psychological safety) and how we can improve and optimise our simulation learning environments.
Just as the practice of designing and delivering simulation has been evolving to meet the needs of learners and institutions, so too has the research in this field. As with all other previously emergent fields of research, there is an imperative to (a) reflect on how the quality and direction of research endeavours can be strengthened and (b) act on recommendations that will allow the field to fulfil its potential. In their 2022 editorial, Walter Eppich and Gabriel Reedy note the general change of direction in health simulation research as moving away from aims that seek to justify simulation activities to those which seek to clarify how, and in which circumstances, simulation is effective [2]. Their call to action is clear and framed by three guiding principles: 1. Theoretical frameworks and concepts must be better integrated into all phases of the design and execution of research projects and programmes of research. 2. Varied methodologies and methodological lenses are required to progress the field. 3. Innovative techniques for data collection and analysis should be explored and embraced. We have been challenged, as a simulation research community, to more deeply consider theory, methodology and methods as they relate to health simulation and health simulation research [2]. The theory that is the focus of this paper is self-determination theory (SDT). SDT focuses on human motivation and behaviours and has informed the growth of various fields [3,4]. Whilst referred to in some health simulation literature [5][6][7], it has yet to be comprehensively applied, explored or tested in this field. This paper forms a foundation for discussing some promising lines of research enquiry that could help advance the field of health simulation. It offers an overview of a theoretical framework that appears to be both relevant to health simulation and that offers a variety of methodologies to explore simulation for new insights and areas for practice improvement.
The theory Self-determination theory (SDT) is described as a macro theory (in this instance, an overarching theory) of human motivation [1]. First proposed in the 1970s by Richard Ryan and Edward Deci, it has been broadly applied, explored and tested in numerous settings and populations, including primary and secondary schools [8,9], universities [10][11][12], workplaces and various health contexts [13]. The origins of SDT lie in the exploration of human motivation and the conditions and environments that impact human behaviours [14]. Over the past four decades, it has slowly and organically developed into a broader theory, which now includes six related "mini-theories" [14]. The mini-theories of SDT are: cognitive evaluation theory, organismic integration theory, causality orientations theory, basic psychological needs theory, goal contents theory and relationships motivation theory [15][16][17] (see Table 1). These interrelated theories offer numerous opportunities for considering foundational principles that may already, and perhaps ought to, underpin the design and delivery of health simulation activities and programmes. One of the early propositions in SDT was that the motivations that lead to behaviours (or inaction) could be separated into categories: those that are self-determined ("i.e. governed by the process of choice and experienced as emanating from the self") and those that are initiated or determined by factors external to the self ("i.e. governed by the process of compliance and experienced as compelled by some interpersonal or intrapsychic force") [19]. These have come to be known as intrinsic and extrinsic motivational forces.
In SDT, the identified types of motivation are often visualised on a spectrum. At the upper end of this spectrum lies "intrinsic motivation" - behaviours that emanate from a sense of self and that are inherently satisfying [15]. This is followed by four states of "extrinsic motivation": external regulation, introjection, identification and integration [1]. Finally, at the lower end of the spectrum lies "amotivation" - a state where an individual lacks any intention to act [1,13]. Intrinsic motivation is explored in the first mini-theory of SDT: cognitive evaluation theory - a theory concerned with the factors that either undermine or support intrinsic motivation [15]. Intrinsic motivation is described in this theory as a type of self-determined motivation. It is a construct that "describes [a] natural inclination toward assimilation, mastery, spontaneous interest, and exploration" [20]. It has long been acknowledged as developing in humans from birth and, operationally, describes behaviour adopted for its inherently satisfying results [15]. Notably, enjoyment that stems from intrinsic motivation is likely to be conducive to personal growth and eudaimonia - a state of living a "complete" life or "living life well" [21]. Also of note, experiments undertaken in the pursuit of exploring this theory have found that some types of rewards can decrease people's intrinsic motivation [3]. The aim of all behaviour is most unlikely to be entirely intrinsically motivated - there are countless internal and external pressures that prompt various behaviours [22]. Whilst intrinsically motivated behaviours are a significant type of self-determined behaviour, they are not the only form of self-determined behaviour. There are numerous extrinsically motivated behaviours that are also said to be self-determined. These are further explored in organismic integration theory, which posits that distinct characteristics of various extrinsically motivated behaviours can be identified [18].
Extrinsically motivated behaviours are those that are undertaken to obtain an external outcome (for example wealth, notoriety or material goods) [23]. In SDT, the study of extrinsic motivation has been much more concerned with the quality of motivation, as opposed to the quantity of motivation [23,24]. This is in contrast to other theories of motivation, particularly as they relate to employment, which often focus on the quantity of motivation that individuals possess in relation to particular tasks [23]. Opportunities to consider this spectrum in health simulation are explored below and culminate in some hypothesised example statements in Table 2. The four subcategories of extrinsic motivation, ordered from the least to the most internalised, are: external regulation, introjection, identification and integration (see descriptions in Table 2). These lie on a continuum of self-determination and, when exercised, produce demonstrably different outcomes and associated outputs. External regulation and introjected motivation are forms of "non-self-determined" motivation [15,23]. Behaviours that fall into these categories of extrinsic motivation are regulated by an external pressure or an external reward, such as financial remuneration or the threat of punishment [13,25]. Identification and integration are considered to be autonomous and self-regulated forms of extrinsic motivation [15,25]. Identifying that there are qualitative differences underlying people's extrinsically motivated behaviours is important [23] (consider your experiences of working with simulation participants who love simulation, versus those who attend because it is a requirement of their job or education). Evaluating these differences holds value for understanding human behaviour, and how our social environments and work systems can be designed to optimise human potential [1].
Amotivation sits at the opposite end of the spectrum from intrinsic motivation. When amotivation is experienced in a workplace, for example, an employee may value an activity or behaviour so little that no effort is exerted to complete or realise the potential of that behaviour [13] (for a health-related example, consider the issues of poor adherence to appropriate hand hygiene).

To illustrate the different constructs of motivation, example statements relating to the qualities of motivation for participating in physical exercise, as presented by Ng et al. [13], are provided in Table 2. Alongside these sit some potential statements relating to health professionals and the quality of motivation to gain consent from patients, and examples relating to participating in simulation activities.

SDT is concerned not only with the quality of motivation but also with the types of environments and contexts that affect motivation, and with the changes in motivation people may experience. Organismic integration theory asserts that people are inherently driven towards learning, mastery and connection [16]. This inherent quality, however, is not realised without supportive conditions. These conditions are believed to include three fundamental psychological needs: autonomy, competence and relatedness [24]. Indeed, the presence or absence of conditions that support these basic needs may "sustain [or] diminish the "innate propensity" of humans to act from an intrinsic motivation" [20]. The examination of intrinsic motivation has therefore been one that has evaluated these conditions, and this forms the basis for basic psychological needs theory. Three basic psychological needs are explored in basic psychological needs theory: autonomy, competence and relatedness. In the context of SDT, autonomy refers to "the perception of being the origin of one's own behavior and experiencing volition in action" [13], and is not defined by autonomy's other definitions, which relate to independence and
separation [17]. Autonomy has been explored at length in terms of both the individual experience and the contexts that either support or inhibit this psychological need [16]. Autonomy-supportive environments include those that encourage and allow individuals to experience their behaviour as volitional. Features of autonomy-supportive environments include non-judgemental attitudes, the provision of rationales for suggestions or decisions, and the facilitation of self-regulation [26].

Competence is described as "the feeling of being effective in producing desired outcomes and exercising one's capacities" [13]. It is concerned with mastery [16]. It has been identified that "the need for competence is best satisfied within well-structured environments that afford optimal challenges, positive feedback and opportunities for growth" [16]. It is not hard to draw links between this statement and the practice of health simulation. We can hypothesise that the high levels of satisfaction students report when participating in simulation events are inextricably linked to the efforts made to create a structured environment and to provide feedback, through debriefing, that is both positive and directive for growth.

Relatedness is defined in SDT as the "feeling of being respected, understood, and cared for by others" [13]. Relationship motivation theory is the newest of the six mini-theories and focuses on the impact of basic psychological needs on interpersonal relationships. A central idea in relationship motivation theory is mutuality of autonomy [3]; in other words, the equal creation of autonomy-supportive environments by each party. This idea has interesting implications for the relationship that develops between facilitators and participants, and indeed between participants themselves.

How has SDT already been applied to simulation?
A handful of studies have been published that investigate links between elements of SDT and the design of health simulation scenarios, activities and programmes. Table 3 provides a brief overview of the studies that have identified SDT itself, or elements of SDT, in their study. They include three prospective, quantitative studies [25,27,28]; two mixed-methods studies [29,30]; and one qualitative study [5]. SDT was also mentioned in a discussion paper regarding mastery learning, but not extensively explored [6].

As can be seen in Table 3, the aims and hypotheses being explored are somewhat varied, but all have a focus on motivation. For example, in the studies conducted by Diaz-Agea, Pujalte-Jesus [5] and Escher and Rystedt [30], motivation to participate in the simulations themselves is explored in cohorts of nursing students and health professionals respectively. In the Henry and Vesel [29] example, motivation was explored in relation to participants' feedback-seeking behaviours. Autonomy is the other SDT element that is explored, with studies working to determine its relationship with different types of motivation [25,28].

Two questionnaires that have been developed in the exploration of SDT were used in the studies included in Table 3: the inventory of intrinsic motivation (IMI) scale and the Situational Motivation Scale (SIMS). The IMI derives from one of the mini-theories of SDT: cognitive evaluation theory [31]. There are numerous versions of this questionnaire, which have been adapted for different contexts (e.g. sport, physical education) and for experiments that have tested cognitive evaluation theory [28,31]. The SIMS is a validated tool that invites participants to respond to prompts linked to four types of motivation: intrinsic motivation, identified regulation, external regulation and amotivation [32].
Beyond the discrete cases of SDT being investigated in health simulation in the examples provided, there is no current programme of research exploring this theory in relation to health simulation. A broader and deeper exploration of the theory and its relevance to health simulation is warranted, for the following reasons: (1) there is a necessity for our field to better understand theoretical foundations that may facilitate progress, and appropriate reform, in the design and delivery of simulation; (2) there is a growing demand for theory to underpin our own professional development as simulationists [2,33]; (3) there is potential for this deeper understanding of practice to enhance outcomes for learners and patients; and (4) there are existing parallels between the language used in the study of SDT and the practice of health simulation.

Current and future implications
We have opportunities to more deeply consider the fundamental principles that underpin health simulation and to determine what elements of SDT could lead to improvements in the design and delivery of health simulation activities, programmes and research. The conceptual argument for this is founded in some assumptions. Namely, that SDT (1) is a relevant theory to consider when exploring how and why simulation is an effective modality for technical and behavioural skill development in the health simulation context; (2) offers new avenues for exploring how simulations can be designed with enhanced and predictable participant benefit; (3) may be relevant in explaining why people who deliver simulation activities (including simulated patients, embedded participants and simulation coordinators) value participating in this type of activity; and (4) has the potential to explain why simulation is a successful modality for learning, skill development, team building and for improving system functionality and safety.

At face value, it does appear that the principles that have guided health simulation activities can be firmly linked to foundational components of SDT. If we consider the often-adopted "basic assumption" through the lens of SDT, we can see alignment between language and theory: ("We believe that everyone participating in this simulation is intelligent, capable [competence], cares about doing their best [autonomy, competence, motivation] and wants to improve [motivation]") [34].

Table 3 (excerpt). Examples of SDT in current simulation and medical education literature:

Diaz-Agea, Pujalte-Jesus [5]. Aim: to explore the views and perspectives of students involved in simulation-based learning related to their process of motivation, and to identify the motivational elements they perceived, as well as the aspects that could reduce their motivation in the simulation sessions. Design: qualitative study; focus-group discussions; 101 nursing students. Brief overview of findings: various themes and subthemes were identified and grouped into factors that participants identified as motivating and demotivating, suggesting that these could be leveraged to motivate students to participate in their nursing education to a greater extent.

Escher, Rystedt [30]. Aims (twofold): first, to examine the responses of the professional groups involved in the training, particularly issues related to the development of self-efficacy and situational motivation; second, …

Schulte-Uentrop, Cronje [27]. Brief overview of findings: higher levels of autonomous situational motivation did not correlate with better performance in non-technical skills during the SBET.

Thoma, Hayden [28]. Purpose: to determine whether first-year medical students reported greater intrinsic motivation when participating in higher-autonomy simulation sessions as compared with lower-autonomy sessions. Design: non-randomised crossover trial; adapted IMI survey; 22 first-year medical students. Brief overview of findings: extracurricular sessions increased participants' perceived autonomy, but they were highly intrinsically motivated in both settings.
In moving from an intuited to an explicit practice of psychological safety that is founded in SDT, we can apply evidence from the broader health professions and clinical education literature. This literature strongly suggests that psychological safety can be provided and optimised when an "autonomy supportive" environment is created and sustained [11,35]. The benefits of autonomy-supportive environments include increased intrinsic motivation of learners (i.e. learners experience deep satisfaction in the learning process and are intrinsically motivated to continue that learning process). Examples of how the features of autonomy-supportive environments may already be, or could be, applied to health simulation are outlined in Table 4.

In efforts to understand the foundations of good-quality health simulation, and to further explore the validity of the various components of SDT in this field, research projects can address quite a broad array of questions. It would be relevant to examine how SDT could further inform simulation design and delivery (as described above), simulation participants' quality of motivation to transfer technical and behavioural skills to the clinical environment, how principles and evidence from SDT could be incorporated into faculty development, and how performance can be optimised.

As an example, we can consider practitioners' quality of motivation to gain patients' consent. Gaining informed consent is a fundamental part of working as a health professional [38]. We know that patients are not optimally providing informed consent for procedures [39], nor for participating in medical research (e.g.
pharmaceutical trials) [40]. There are acknowledged issues related to patients' level of health literacy and clinicians' overconfidence that patients have understood what they have explained, and there is an opportunity to examine the role of education and performance enhancement in addressing these issues [39]. SDT could be used to examine health professionals' quality of motivation for gaining informed consent. Relevant, preliminary research questions include the following: "What is the quality of motivation that health professions students and health professionals demonstrate in relation to the technical and behavioural skills of gaining informed consent from patients?" and "What influences health professionals' quality of motivation for gaining consent?".

When considering how to apply this knowledge to the design of a simulation, we can ask questions about the impact of different approaches to learning about the consent process. "Is externally regulated motivation to gain informed consent related to learning about this process from a predominantly legal perspective?" "Does learning about, or reflecting on, these skills from a bioethics perspective lead to identified or integrated motivation when gaining consent in a simulated scenario?" "What are the intended and unintended consequences for participants who have come to simulations from these different teaching perspectives?" Given previous work with SDT, we might hypothesise that learners will be impacted by these external factors, and that their subsequent behaviours may be moderated by the lens of teaching or debriefing that is adopted. This same principle would apply to an array of technical and behavioural skills: hand hygiene, breaking bad news, and engaging in low-dose, high-frequency simulation for the maintenance of various skills.
Pathways exist for investigating the relevance of SDT to health simulation and for testing SDT theory in simulated contexts. These can be shaped to further extend the work of others who have investigated SDT and to provide evidence to underpin the various techniques and modalities of health simulation. Ultimately, we should be aiming to generate, and then to use, the best available evidence to support simulation practice, the refinement of learning outcomes, and faculty development efforts. SDT is a theory that has been built and tested slowly, strategically and with care not to oversimplify concepts or to foster reductionism. We can work towards more than isolated studies that may lead to another set of education myths [41,42]. We have the opportunity to continue in the SDT tradition of systematically testing ideas and theory to determine whether, and which, principles will facilitate a maturing of health simulation for teaching, training, systems testing, performance evaluation and professional development.

Conclusion
SDT is a theory that has been explored in many fields, and whilst elements of it have been explored in simulation, this exploration is in its infancy. Proposed in this paper is a rationale for conducting research that examines the relevance of the theory to health simulation and explores how health simulation may benefit from SDT research in other fields.
Why might we do this? We come back to the introduction of this paper, where we considered the statements and philosophy that we want to be true in the field of health simulation. There is a pathway for testing our underlying assumptions and for enhancing our practice through detailed, structured and theoretically sound methods. In testing potential associations between SDT and simulation, we may be better informed about when the statements we make are more likely to be true ("this is a psychologically safe environment") and when they really may not be. In examining health simulation through the lens of SDT, we have opportunities to capture new insights into why simulation can be effective in enhancing performance and to further generate an evidence base for best practice in this field.

Table 3 (excerpt, continued). Examples of SDT in current simulation and medical education literature:

Escher, Rystedt [30]. Second aim: to explore participants' perceptions of the design features important to the training and the opportunities for, and barriers to, transferring the lessons learned in [simulation-based team training] to teamwork in the operating room. Design: mixed methods; self-efficacy questionnaire; SIMS survey; focus-group discussions; 71 health professionals who work in operating theatres. Brief overview of findings: the team training provided was associated with increased self-reported confidence and intrinsic motivation in the operating room team members who participated; barriers to transferring lessons learned into clinical practice largely related to organisational/system-level factors.

Henry, Vesel [29]. Goal: to gain information on how educational environments can promote feedback seeking among learners. Design: mixed methods; IMI survey plus participant interviews; 34 medical residents completed the IMI survey, with 10 interviews. Brief overview of findings: the relationship between motivation and feedback is complex; the IMI could not predict this relationship, and factors other than motivation were linked to feedback-seeking behaviours.

Moll-Khosrawi, Cronje [25]. Hypothesis tested: that [simulation-based medical education] and bedside teaching enhance autonomous motivation and decrease controlled motivation. Design: prospective interventional cohort study; SIMS; 145 third-year medical students sampled, with varied response rates at different time points. Brief overview of findings: in participants who had bedside teaching, there was found to be a decrease in external (controlled) motivation and identified (autonomous) motivation; the simulation-based trainings did not change students' level of motivation.

Schulte-Uentrop, Cronje [27]. Aim: to investigate the correlation of students' motivation and their performance of non-technical skills during simulation-based emergency training (SBET). Design: prospective cross-sectional cohort study; SIMS; anaesthesiology students' non-technical skills; 422 medical students (years 1-4).

Table 1. Overview of the six mini-theories of self-determination theory (SDT)
Table 2. Examples of motivational construct statements
Table 3. Examples of SDT in current simulation and medical education literature
Abbreviations: IMI, inventory of intrinsic motivation; SBET, simulation-based emergency training; SIMS, Situational Motivation Scale
Dysanapsis is differentially related to lung function trajectories with distinct structural and functional patterns in COPD and variable risk for adverse outcomes

Summary

Background Abnormal lung function trajectories are associated with increased risk of chronic obstructive pulmonary disease (COPD) and premature mortality; several risk factors for following these trajectories have been identified. Airway under-sizing dysanapsis (small airway lumens relative to lung size) is associated with an increased risk for COPD. The relationship between dysanapsis and lung function trajectories at risk for adverse outcomes of COPD is largely unexplored. We test the hypothesis that dysanapsis differentially affects distinct lung function trajectories associated with adverse outcomes of COPD.

Methods To identify lung function trajectories, we applied Bayesian trajectory analysis to longitudinal FEV1 and FVC Z-scores in the COPDGene Study, an ongoing longitudinal study that collected baseline data from 2007 to 2012. To ensure clinical relevance, we selected trajectories based on risk stratification for all-cause mortality and prospective exacerbations of COPD (ECOPD). Dysanapsis was measured in baseline COPDGene CT scans as the airway lumen-to-lung volume (a/l) ratio. We compared a/l ratios between trajectories and evaluated their association with trajectory assignment, controlling for previously identified risk factors. We also assigned COPDGene participants for whom only baseline data is available to their most likely trajectory and repeated our analysis to further evaluate the relationship between trajectory assignment and a/l ratio measures.

Findings We identified seven trajectories: supranormal, reference, and five trajectories at increased risk for mortality and exacerbations. Three at-risk trajectories are characterized by varying degrees of concomitant FEV1 and FVC impairments and exhibit airway-predominant COPD patterns as assessed by quantitative CT imaging.
These trajectories have lower a/l ratio values and increased risk for mortality and ECOPD compared to the reference trajectory. Two at-risk trajectories are characterized by disparate levels of FEV1 and FVC impairment and exhibit mixed airway and emphysema COPD patterns on quantitative CT imaging. These trajectories have markedly lower a/l ratio values compared to both the reference trajectory and the airway-predominant trajectories, and are at greater risk for mortality and ECOPD compared to the airway-predominant trajectories. These findings were observed among the participants with baseline-only data as well.

Interpretation The degree of dysanapsis appears to portend patterns of progression leading to COPD. Assignment of individuals, including those without spirometric obstruction, to distinct trajectories is possible in a clinical setting and may influence management strategies. Strategies that combine CT-assessed dysanapsis together with spirometric measures of lung function and smoke exposure assessment are likely to further improve trajectory assignment accuracy, thereby improving early detection of those most at risk for adverse outcomes.

Funding United States National Institutes of Health, COPD Foundation, and Brigham and Women's Hospital.

Introduction
Recent work has identified distinct lung function trajectories throughout the life course, and abnormal trajectories are associated with increased risk of developing chronic obstructive pulmonary disease (COPD) and with premature mortality.1,2 There is significant interest in identifying and understanding the risk factors associated with abnormal lung function trajectories.4,5 However, whether anatomical characteristics of the lung associate with lung function trajectories is less well known.
Dysanapsis is one such anatomical feature of the lung and is defined as a mismatch between airway lumen caliber and lung volume. Smith and colleagues showed that dysanapsis as measured on computed tomography (CT) was significantly associated with COPD among older adults, with lower airway tree caliber relative to lung size conferring greater risk.6 Forno and colleagues demonstrated that dysanapsis is associated with exacerbations in obese children with asthma.7 Furthermore, Bhatt and colleagues reported that airway lumens (adjusted for lung size) were smaller in women than in men, and that this conferred a greater risk for morbidity and mortality.8 Here we sought to bridge the knowledge gap between dysanapsis as an anatomical feature of the lung and its relationship with lung function trajectories linked to distinct structural and functional patterns of COPD.

Research in context

Evidence before this study
Previous work has established that there are various lung function trajectories throughout the life course; several of these lead to COPD, including accelerated decline and failure to achieve peak lung function in early adulthood. Other work has identified CT-assessed dysanapsis, a mismatch of airway lumen caliber and lung volume, as a risk factor for COPD. However, the relationship between dysanapsis and lung function trajectories is largely unexplored.

Added value of this study
We applied a Bayesian trajectory approach to longitudinal data from the COPDGene study to jointly model the coevolution of FEV1 and FVC in relation to age and cigarette smoke exposure. Application of this approach to the COPDGene data identified seven lung function trajectories, five of which are at increased risk for exacerbations and mortality. We show that moderate levels of dysanapsis are related to airway-predominant trajectories, while more pronounced dysanapsis is related to mixed airway and emphysema trajectories.

Implications of all the available evidence
We show that airway under-sizing dysanapsis (small airway lumens relative to lung size) is not only a risk factor for COPD, but that the degree of dysanapsis, given its association with trajectory assignment, appears to portend patterns of progression with distinct structural and functional patterns and variable risk for adverse outcomes. Assigning individuals, including those without obstruction, to distinct trajectories could enable earlier, more tailored preventive and management strategies. Strategies that combine CT-assessed dysanapsis together with spirometric measures of lung function and smoke exposure assessment are likely to further improve trajectory assignment accuracy, thereby improving early detection of those most at risk for adverse outcomes.

Study cohort
The COPDGene Study is an ongoing, multicenter, longitudinal study designed to investigate the genetic and epidemiologic characteristics of COPD.9 COPDGene enrolled 10,198 non-Hispanic white and African American ever-smokers with and without COPD and 454 never-smokers. Participants were between the ages of 45 and 80 years, and ever-smokers had a minimum of 10 pack-years cigarette smoke exposure at baseline (a small number of participants less than 45 years old at baseline were also recruited and considered in our study). Demographic information including age, sex, race, height, and body mass index (BMI) was obtained with standardized questionnaires and procedures. Information about parental characteristics (smoking history and history of emphysema, COPD, chronic bronchitis, and asthma) was recorded at baseline. Baseline data collection procedures were repeated at approximately 5 years (visit 2) and 10 years (visit 3); acquisition of 10-year follow-up data is ongoing. Study data consisting of smoking history (pack-years exposure and current smoking status), post-bronchodilator spirometric measures of lung function, volumetric CT of the chest, history of gastroesophageal reflux disease (GERD), health-related quality of life as measured by the St. George's Respiratory Questionnaire (SGRQ), exercise capacity as measured by the 6-min walk test (6MWD), and dyspnea assessed with the modified Medical Research Council (mMRC) scale are recorded at each study visit. Spirometry and the 6-min walk test were performed per ATS recommendations.10 The BODE index (BMI, airflow Obstruction, Dyspnea and Exercise capacity) was computed for each visit. Post-bronchodilator forced expiratory volume in 1 s (FEV1), forced vital capacity (FVC), and their percent predicted values were used to define spirometric groups (COPD GOLD groups 1-4). The institutional review boards of all participating centers approved the COPDGene Study, and all participants provided written informed consent.
Trajectory analysis
Our trajectory analysis approach used a combination of prior knowledge, data-driven inference, and model selection based on clinical relevance (using analysis of mortality and exacerbation risk). We used the bayes_traj (https://github.com/acil-bwh/bayes_traj) software routine, which is a Bayesian version of Group Based Trajectory Modeling (GBTM) and enables incorporation of prior information into the data fitting process.11 We used GLI reference equations, which account for age, height, sex, and race, to compute FEV1 and FVC Z-scores.12 Investigators have commonly performed lung function trajectory analysis using functions of age as predictors. We additionally included smoke exposure and smoking status terms, given that COPDGene is comprised of current and former smokers and because it is generally accepted that smokers have a differential response to smoke exposure. By including these terms in the set of predictors, individuals assigned to a given trajectory are expected to have similar patterns of progression over time and in response to smoke exposure and smoking status. Before performing trajectory analysis, we centered the age variable at 20 for men and 18 for women; this makes the intercept term interpretable as the Z-score value obtained at the approximate age of peak lung function. We refer to the centered age variable as "years since presumed peak lung function". We evaluated several candidate predictor sets as described in the supplement. Selected trajectory model predictors included an intercept term, years since presumed peak lung function,1 pack-years smoke exposure, and an interaction between smoking status and years since presumed peak lung function.
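The predictor set described above can be sketched as follows. This is an illustrative reconstruction, not the study's actual code: the function name and argument layout are our assumptions, and the real model is fit with bayes_traj rather than assembled by hand.

```python
import numpy as np

def build_predictors(age, sex, pack_years, current_smoker):
    """Assemble the four per-visit predictors named in the Methods:
    intercept, years since presumed peak lung function, pack-years
    smoke exposure, and a smoking-status-by-time interaction.
    Hypothetical sketch; `sex` is 'M' or 'F'."""
    # Centre age at the approximate age of peak lung function
    # (20 years for men, 18 years for women).
    peak_age = 20.0 if sex == "M" else 18.0
    years_since_peak = age - peak_age
    return np.array([
        1.0,                                        # intercept
        years_since_peak,                           # time term
        pack_years,                                 # cumulative smoke exposure
        float(current_smoker) * years_since_peak,   # interaction term
    ])

# Example: a 60-year-old female former smoker with 30 pack-years
x = build_predictors(60, "F", 30.0, current_smoker=False)
# x -> [1.0, 42.0, 30.0, 0.0]
```

Because the interaction term multiplies smoking status by centred age, former smokers contribute a zero interaction column, which is what lets the model estimate a separate slope for current smokers.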
We restricted trajectory modeling to those participants for whom two or more longitudinal time points are available, and we excluded data corresponding to pack-year smoke exposure exceeding 150 pack-years (less than 1% of the data sample). We used the bayes_traj routine to jointly model FEV1 and FVC Z-score trajectories. The rationale for simultaneously considering both FEV1 and FVC Z-scores was to more fully represent heterogeneity in spirometric progression patterns. Additionally, by jointly considering these two measures, the ability to detect smaller groups with distinct spirometric progression patterns is improved, owing to the greater number of measures per participant being modeled. The Bayesian paradigm enables incorporation of prior belief in the form of probabilistic priors. Since Z-score values are, by definition, normally distributed around a mean of zero, we used zero-centered normal distributions for the FEV1 and FVC Z-score intercept coefficients. This was intended to mitigate the effects of data absence in early adulthood (due to COPDGene enrollment criteria requiring participants to be at least 45 years old).

We executed the trajectory routine with several hundred random initializations, and we sorted the resulting models according to the Watanabe-Akaike information criterion (WAIC2).13 We focused on those trajectories that account for at least 2% of the data sample. To select clinically meaningful trajectories for further analysis, we evaluated the models with the best WAIC2 scores in terms of exacerbation and mortality risk stratification, and we selected the model with statistically significant discrimination between adjacent trajectories in terms of mortality hazard ratios and exacerbation incidence rate ratios (see supplement for additional details).
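The first part of this model-selection step can be illustrated with a small sketch. This is a hypothetical simplification with made-up WAIC values: the real workflow ranks fitted bayes_traj models and then applies the clinical risk-stratification criteria, which are not reproduced here.

```python
def shortlist_models(candidates, min_fraction=0.02):
    """Rank candidate models by WAIC (lower is better) and drop
    trajectories accounting for less than `min_fraction` of the sample.

    candidates: list of (waic, trajectory_fractions) pairs.
    Illustrative only; the final choice among shortlisted models would
    use mortality/exacerbation risk stratification."""
    ranked = sorted(candidates, key=lambda m: m[0])
    return [
        (waic, [f for f in fractions if f >= min_fraction])
        for waic, fractions in ranked
    ]

candidates = [
    (1520.3, [0.50, 0.30, 0.19, 0.01]),  # made-up WAIC; 1% trajectory dropped
    (1498.7, [0.45, 0.35, 0.15, 0.05]),
]
best = shortlist_models(candidates)[0]
# best -> (1498.7, [0.45, 0.35, 0.15, 0.05])
```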
For each participant, trajectory modeling produced a probability of assignment to each of the identified trajectories. We assigned each participant to their most probable trajectory and treated this assignment as a factor variable for further analysis. We then used the derived trajectory model to assign those COPDGene participants for whom only baseline data is available (the "baseline-only" cohort) to their most probable trajectory. These participants were not included in the data sample used to train the trajectory model and enabled us to further evaluate the relationship between trajectory assignment and a/l ratio values. Figure E1 describes the data selection procedure.

Outcomes
We analyzed two outcomes to assess the clinical meaningfulness of trajectories: exacerbations during follow-up and all-cause mortality. Exacerbations are defined in COPDGene as a new onset of, or increase in, cough, phlegm, or dyspnea. An episode that requires antibiotics and/or steroids is counted as an exacerbation. Participants were asked every three to six months about exacerbation episodes through the COPDGene longitudinal follow-up program.14 Deaths were also identified through the longitudinal follow-up program and were confirmed with death certificates from the Social Security Death Index.

CT analysis and dysanapsis
Chest CT analysis in the COPDGene Study has been described previously.9 Briefly, quantitative analysis of inspiratory CT scans using Thirona software (Thirona LungQ, Nijmegen, The Netherlands) produced measures of emphysema and airway wall thickness. The Hounsfield unit (HU) value representing the 15th percentile of the lung region HU histogram (Perc15) was used for densitometric assessment of the lung parenchyma.15 Airway wall thickening was assessed as the square root of the wall area of a theoretical airway with an internal lumen perimeter of 10 mm (Pi10).16
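Returning to the trajectory-assignment step described at the start of this section, a minimal sketch (with hypothetical names and made-up probabilities) of assigning each participant to their most probable trajectory:

```python
import numpy as np

def assign_most_probable(posterior):
    """posterior: (n_participants, n_trajectories) array in which each
    row holds one participant's probabilities of belonging to each
    trajectory. Returns 1-based trajectory labels."""
    posterior = np.asarray(posterior, dtype=float)
    # Each row should be a proper probability distribution.
    assert np.allclose(posterior.sum(axis=1), 1.0)
    return posterior.argmax(axis=1) + 1

probs = [
    [0.05, 0.80, 0.05, 0.04, 0.03, 0.02, 0.01],  # mostly trajectory 2
    [0.01, 0.09, 0.10, 0.10, 0.10, 0.10, 0.50],  # mostly trajectory 7
]
labels = assign_most_probable(probs)
# labels -> [2, 7]
```

The same argmax rule applies unchanged to the baseline-only cohort, since assignment only requires the fitted model's posterior probabilities, not longitudinal data.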
Additionally, we quantified dysanapsis on baseline inspiratory CT scans using airway lumen diameters at 13 anatomic locations together with the CT-assessed total lung volume (VIDA Diagnostics, Coralville, IA, USA). Airway locations included: the mainstem left and right bronchi; the bronchus intermedius; the lobar bronchi of the left upper lobe (LUL), left lower lobe (LLL), right upper lobe (RUL), right middle lobe (RML), and right lower lobe (RLL); and the following six segmental bronchi: the apicoposterior segment of the LUL, the medial segment of the lingula, the basal posterior segment of the LLL, the apical segment of the RUL, the medial segment of the RML, and the posterior basal segment of the RLL. The geometric mean of the 13 airway lumen diameters was divided by the cube root of the CT-measured total lung volume to provide a measure of dysanapsis, referred to as the airway-to-lung (a/l) ratio.6 Lower a/l ratio values indicate smaller airway tree lumens relative to lung size and thus greater dysanapsis.

Statistical analysis
Mortality time-to-event and exacerbation count modeling was performed using the R software package (version 3.6.1).17 Participant trajectory assignment was treated as a factor variable. We performed extended Cox modeling (R's coxph routine18,19) using age as our time scale and all-cause mortality as our outcome of interest. We examined scaled Schoenfeld residuals with a two-sided chi-square test to assess the proportional hazards assumption for trajectory assignment. Our reduced model included current smoking status, pack-years smoke exposure, sex, and race as covariates; the full model also included BMI, mMRC, and 6MWD, which have been shown to be independent predictors of mortality.20
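The a/l ratio described in the CT analysis subsection can be written directly from its definition. A sketch (the function name and example values are ours, not the study's):

```python
import numpy as np

def airway_to_lung_ratio(lumen_diameters, lung_volume):
    """Geometric mean of the 13 airway lumen diameters divided by the
    cube root of the CT-measured total lung volume. Both quantities
    must use consistent length units (e.g. mm and mm^3); lower values
    indicate greater dysanapsis."""
    d = np.asarray(lumen_diameters, dtype=float)
    # Geometric mean computed in log space for numerical stability.
    geometric_mean = np.exp(np.log(d).mean())
    return geometric_mean / lung_volume ** (1.0 / 3.0)

# Illustrative values only: 13 lumen diameters in mm, ~6 L lung volume in mm^3
diameters = [12.0, 11.5, 10.8, 9.9, 9.1, 8.6, 8.2, 7.9,
             7.4, 7.0, 6.6, 6.3, 6.0]
ratio = airway_to_lung_ratio(diameters, 6.0e6)
```

Both numerator and denominator have units of length, so the a/l ratio is dimensionless, which is what allows comparison across participants with different lung sizes.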
For the prospective number of total exacerbations, we used zero-inflated negative binomial mixed modeling (R's glmm.zinb routine21) with an offset variable to account for differences in observation times. The reduced model adjusted for age, current smoking status, pack-years smoke exposure, sex, and race; the full model additionally adjusted for SGRQ, gastroesophageal reflux, and number of exacerbations over the previous year.22 We assessed differences in a/l ratios between trajectories using the Mann-Whitney test (Python's scipy.stats.mannwhitneyu routine, version 1.7.3)23 and used Bonferroni correction for multiple comparisons.

Parental characteristics, including parental emphysema, COPD, chronic bronchitis, and asthma, as well as whether parents were cigarette smokers, have previously been shown to be associated with lung function trajectories.24 To evaluate whether the a/l ratio is an independent predictor of trajectory assignment, we considered the a/l ratio together with these parental characteristics in a multinomial logistic regression using a forward stepwise-selection strategy. The a/l ratios were scaled by the standard deviation of the reference trajectory for ease of interpretation, so that relative risk ratios can be interpreted as how many times more or less likely assignment to a trajectory is (relative to the reference trajectory) per one-standard-deviation change.

Finally, we investigated whether dysanapsis could be considered a static anatomical feature of the lung or one that changes with age, and we also assessed whether airway wall thickening and emphysematous destruction of the parenchyma could confound the measured a/l ratio values. To do this, we estimated the per-trajectory relationships between a/l ratio, airway wall thickening (Pi10), and emphysema (Perc15) with age using ordinary least squares regression (scipy.stats.OLS routine, version 1.7.3)23 (see supplement for details).
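The between-trajectory comparison can be sketched in plain Python; the study used scipy's mannwhitneyu, whereas this simplified stand-in relies on the normal approximation and omits tie corrections, so it is illustrative only:

```python
import math

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney test via the normal approximation.
    Counts pairs where x_i > y_j (ties count 1/2); no tie correction
    is applied to the variance, so this is a sketch for untied data."""
    u = sum(1.0 if xi > yj else 0.5 if xi == yj else 0.0
            for xi in x for yj in y)
    n1, n2 = len(x), len(y)
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sigma
    # two-sided p-value from the standard normal CDF
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return u, min(p, 1.0)

def bonferroni(p_values):
    """Scale each p-value by the number of comparisons, capped at 1."""
    k = len(p_values)
    return [min(1.0, p * k) for p in p_values]

u, p = mann_whitney_u([1, 2, 3], [4, 5, 6])  # u = 0: every x below every y
print(bonferroni([p, 0.20, 0.04]))
```

In practice, one such test would be run per trajectory comparison and the resulting p-values corrected jointly, as in the analysis described above.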
Role of the funding source

The funders had no role in study design, data collection, data analysis, data interpretation, or writing of the report.

Results

Table 1 and Table 2 provide characteristics of the 5401 COPDGene participants on whom trajectory modeling was performed, by visit and by a/l ratio tertile, respectively. For this sample, 538 mortality events were recorded, with a median follow-up time of 10.4 years (interquartile range: 9.5-11.1 years). A total of 13,795 exacerbation events were recorded (2.63 per participant on average, interquartile range: 0-3), with a mean follow-up time of 6.5 years per participant.

Bayesian trajectory analysis identified seven trajectories in COPDGene, with each trajectory accounting for at least 2% of the data sample. Per-trajectory participant characteristics at baseline are provided in Table 3, and trajectories are plotted in Fig. 1 (Figure E3 shows all trajectories in a single plot). Per-trajectory participant characteristics of the baseline-only cohort are provided in Table E6. Trajectory 1 is characteristic of supranormal individuals. Trajectory 2 exhibits FEV1 and FVC Z-score values within the normal range with modest declines in Z-score values with increasing age and smoke exposure; we treated this trajectory as the reference for subsequent analysis. The remaining trajectories fall into two broad categories based on qualitative assessment: those having varying degrees of concomitant FEV1 and FVC impairments (trajectories 3-5) and those having disparate levels of FEV1 and FVC impairment (trajectories 6 and 7). Table 3 shows that those in the first category exhibit greater airway wall thickening (higher Pi10 values) with no marked difference in emphysema (measured by Perc15) compared to the reference trajectory. On the other hand, trajectories 6 and 7 exhibit both greater airway wall thickening and greater emphysema levels. We therefore refer to the first category as airway predominant trajectories and the second category as mixed
airway and emphysema trajectories. Trajectories 3-7 are further summarized as follows:

Airway predominant trajectories:

3762 COPDGene participants comprised the baseline-only cohort. Of these, 3194 (85%) were assigned to one of the seven trajectories described above. For this subset, 1063 mortality events were recorded, with a median follow-up time of 3.9 years (interquartile range: 2.1-6.1 years). A total of 3675 exacerbation events were recorded (1.2 per participant on average), with a mean follow-up time of 3 years per participant.

Table 4 shows significantly increased all-cause mortality hazard ratios for each of the at-risk trajectories compared to trajectory 2, as well as significantly increased incident rate ratios for exacerbations for each of the at-risk trajectories, with trajectory 2 as the reference group. These results were significant in both the reduced model and the full model (Tables E2-E5 provide hazard ratios and incident rate ratios with the at-risk trajectories taken as the reference). Table 4 also shows hazard ratios and incident rate ratios corresponding to those participants in the baseline-only cohort who were assigned to their most likely trajectories. In Fig.
2 we compare a/l ratio measures between the reference trajectory and the at-risk trajectories. Within each trajectory category (airway predominant and mixed airway and emphysema) there is a trend toward lower a/l ratio values with increasing risk for all-cause mortality and prospective exacerbations. Notably, while all at-risk trajectories exhibit lower a/l ratio values compared to the reference trajectory, the decrements are less pronounced in the airway predominant category than in the mixed airway and emphysema category. These results are also evident in the baseline-only cohort (Figure E5). We also considered the statistical significance of a/l ratio values between adjacent at-risk trajectories. In the training cohort, we observed a significant difference between trajectories 3 and 4 (p < 1.00e-04) and between 5 and 6 (p < 1.00e-03), but not between 4 and 5 or between 6 and 7. In the baseline-only cohort, we observed a significant difference between trajectories 3 and 4 (p < 1.00e-04), between 4 and 5 (p < 0.01), and between 5 and 6 (p < 1.00e-04), but not between 6 and 7.

Table 5 shows that dysanapsis is associated with increased relative risk ratios (RRR) for assignment to each of the at-risk trajectories, with the magnitude of the RRR increasing from trajectory 3 to 7. Using forward stepwise selection, we find that the risk ratios corresponding to scaled a/l ratios remain significant after adjusting for parental risk factors.
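Two of the reported effect measures lend themselves to simple illustrative calculations (all numbers here are synthetic, not estimates from the study): a crude incident rate ratio normalizes event counts by person-time, mirroring the role of the offset term in the count models, and an RRR reported per 1-SD change in a/l ratio can be rescaled to other decrements under a log-linearity assumption:

```python
def crude_irr(events_a, person_years_a, events_b, person_years_b):
    """Crude (unadjusted) incident rate ratio of group A vs. reference
    group B; person-time plays the role of the model's offset term."""
    return (events_a / person_years_a) / (events_b / person_years_b)

def rrr_for_decrement(rrr_per_sd, decrement, sd=0.0039):
    """RRR implied by an a/l ratio decrement, assuming the log-RRR is
    linear in the scaled predictor; sd defaults to the reference
    trajectory's a/l standard deviation reported in the text."""
    return rrr_per_sd ** (decrement / sd)

print(crude_irr(30, 10.0, 10, 10.0))   # 3.0
print(rrr_for_decrement(2.0, 0.0078))  # ≈ 4.0 (a 2-SD decrement)
```

The adjusted estimates in Tables 4 and 5 come from the regression models described in the Methods; these crude calculations only convey how the quantities are interpreted.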
Discussion

We analyzed longitudinal data from the COPDGene Study and identified seven distinct lung-function trajectories. A benefit of trajectory analysis is that it identifies population subgroups with similar progression patterns in response to age and external factors, such as smoke exposure. Similarity in progression patterns putatively indicates endotypes: population subgroups that respond and progress in a similar fashion due to underlying similarities in genetics, physiology, or, as suggested here, anatomic architecture. Indeed, previous research demonstrates a genetic basis for airway branch variation.25 Our analysis found that an anatomical characteristic of the lung, dysanapsis, is associated with the more at-risk lung-function trajectories, which are at increased risk for all-cause mortality and prospective pulmonary exacerbations. Interestingly, other anatomical features (airway fractal dimension and total airway count) have been associated with lung function independent of dysanapsis in healthy participants.26 We posit that these, too, may be linked to various lung function trajectories. Nearly all participants assigned to trajectories 6 and 7 had COPD at baseline and, as noted above, also had more emphysema (lower Perc15 values) and higher Pi10 values compared to trajectory 2. Emphysema is typically thought to accompany a rapid decline in FEV1; this is seen in trajectory 6 but not in trajectory 7.
Indeed, in trajectory 7 the decline in FEV1 appears to halt in older age. Whether this trend reflects response to intervention, survival bias, or some other factor requires further investigation. The rapid decline in FVC in trajectories 6 and 7 might reflect an increasing degree of air trapping, which is supported by the high CT lung volumes measured in those assigned to these trajectories (Table 3). Interestingly, despite fewer pack-years of smoke exposure compared to trajectory 6, trajectory 7 has more clinical (MMRC, 6MWD, BODE) and radiologic (Perc15, Pi10) impairment, suggesting these individuals are comparatively more susceptible to smoke exposure (Table 3). Despite these notable differences, we did not observe a statistically significant difference in a/l ratios between these two trajectories. Nevertheless, it is interesting to put observations of these trajectories in the context of Smith et al., who observed that participants with established COPD and smaller a/l ratio values had lung function decline comparable to community-based samples, while those with established COPD and larger a/l ratio values had faster lung function decline.6 They suggest that the former might correspond to those with low peak lung function in early adulthood followed by normal decline, while the latter might correspond to those with persistent accelerated decline, two distinct patterns leading to COPD as described by Lange et al.

a a/l ratio measurements were available for 5206 of the 5401 participants included in our study. Data are presented as mean ± SD, number (percent), or number/total (percent).
Table 2: Baseline characteristics of COPDGene participants by a/l ratio tertile.a

Given the similarity in CT characteristics and the pattern of rapid FVC decline shared by trajectories 6 and 7, we raise the possibility that those with the most pronounced dysanapsis correspond to rapid lung function decline patterns. In the case of trajectory 6, rapid decline is evident during COPDGene's period of observation. In the case of trajectory 7, which has patterns of FEV1 decline comparable to the reference trajectory, the period of rapid decline may have occurred earlier in life and may also have been accompanied by failure to achieve peak lung function. On the other hand, the airway predominant trajectories (3, 4, and 5) all have patterns of FEV1 and FVC change comparable to the reference trajectory, albeit with consistently lower Z-score values. These trajectories have less pronounced dysanapsis, and we posit that they correspond to low peak lung function with more normal rates of decline throughout adulthood. Interestingly, the airway-predominant trajectories have a proportionally greater representation of African Americans compared to the supranormal, reference, and mixed airway and emphysema trajectories. Smith et al. identified CT-assessed dysanapsis as a risk factor for COPD.6 One possible explanation for the trend in a/l ratio values across trajectories observed in Fig.
2 is the difference in COPD prevalence. However, we observe a similar relationship between a/l ratio values and trajectories when we consider only those participants with COPD in each trajectory (Figure E6). Our work shows that dysanapsis is not only a risk factor for COPD; the degree of dysanapsis also appears to portend distinct trajectories leading to COPD. This is particularly notable for the airway predominant trajectories, which exhibit higher FEV1/FVC ratios in the pre-COPD state. CT-assessed dysanapsis has been shown to be associated with decreased FEV1/FVC in healthy never-smokers during early adulthood.27 Our work suggests that individuals with airway under-sizing dysanapsis, independent of decreased FEV1/FVC, may nonetheless be at increased risk for COPD via airway predominant trajectories.

Our work has important clinical implications. First, we demonstrate in the baseline-only cohort that our trajectory model can assign individuals to trajectories using single-time-point measures of spirometry and assessment of smoking history and status; with additional longitudinal measurements, the accuracy of this assignment is expected to improve. These assignments can be used for risk assessment and management decisions (e.g.
by differentiating between airway predominant and mixed airway and emphysema patterns, and by assigning risk for adverse outcomes). Second, our study extends previous research by highlighting the significance of subtle airway under-sizing dysanapsis in relation to airway predominant trajectories. A venue for applying these findings is within CT lung cancer screening cohorts, to identify individuals at risk for COPD via distinct trajectories. Strategies that combine CT-assessed dysanapsis and spirometry could further improve early trajectory assignment and enable intervention prior to costly and burdensome outcomes. Furthermore, by assigning individuals to distinct at-risk

Data are presented as mean ± SD or number (percent). Trajectories 1 and 2 are supranormal and reference, respectively. Trajectories 3-5 (bold) are characterized by airway predominant abnormality leading to COPD; trajectories 6 and 7 (italic) are characterized by mixed airway and parenchymal abnormality.

A strength of our study is the use of the COPDGene study, a large, longitudinal, well-characterized cohort of smokers that is enriched for COPD. This enabled us to identify several distinct lung-function progression patterns leading to COPD with a granularity difficult to achieve with population-based cohorts. Indeed, even with enrichment for COPD, trajectories 5, 6, and 7 account for 4.9%, 2.5%, and 2.7% of the data sample, respectively.
Our study has certain limitations. First, we assume that dysanapsis is a static feature of an individual's lung architecture; however, the lack of CT measures at earlier ages is a limitation, and further investigation in younger populations is needed. Nevertheless, our cross-sectional analysis provides no compelling evidence that the CT measure of dysanapsis is associated with age (see supplement, Table E7 and Figures E7-E12). Second, we assessed dysanapsis in a population of smokers with and without COPD. It is reasonable to suspect that the a/l ratio measure could be susceptible to certain disease processes: airway lumen narrowing due to wall thickening, and lung hyperinflation due to emphysema. However, our cross-sectional analysis considered both these measures, and we found no compelling evidence to support the notion that these processes affect our findings (see supplement, Table E7). This is in line with the analysis by Smith et al., who concluded that airway to lung ratio measures were unlikely to be affected by emphysema-associated loss of airway tethering, airway remodeling, or lung hyperinflation, given that their findings were similar after adjusting for emphysema severity.6 Third, COPDGene enrollment criteria required participants to be at least 45 years old at baseline. Hence, we did not have data in early adulthood that could more definitively link trajectories to the low peak lung function and/or early rapid decline that has been observed previously. We mitigated the effect of this data limitation by incorporating an assumption about the peak lung function distribution. Nevertheless, further study of the connection between dysanapsis and lung function trajectories in early life is warranted. This holds particularly true for trajectories 6 and 7, where nearly all members in COPDGene were diagnosed with COPD at baseline. Improving our understanding of the pre-COPD characteristics of these trajectories would significantly improve the ability to assign individuals to them early, and thus their clinical utility.

Fig. 1 (caption, continued): The left-most panel in each row shows FEV1/FVC Z-scores vs. FEV1 Z-scores and indicates the empirically observed levels of obstruction and FEV1 impairment. Trajectories 3-5 (light blue border) are characterized by airway predominant abnormality leading to COPD; trajectories 6 and 7 (light red border) are characterized by mixed airway and emphysema abnormality. Note: Trajectory 1 (supra-normal) is not shown.

Table 4 notes: Reduced models (left): extended Cox models adjusted for pack-years smoke exposure, current smoking status, sex, and race; zero-inflated negative binomial mixed models adjusted for age, pack-years smoke exposure, current smoking status, sex, and race. Full models (right): extended Cox models adjusted for pack-years smoke exposure, current smoking status, BMI, MMRC, 6MWD, sex, and race; zero-inflated negative binomial mixed models adjusted for age, sex, race, pack-years smoke exposure, current smoking status, SGRQ, GERD, and number of exacerbations in the previous year. Trajectory 2 is used as the reference in all models. Trajectories 3-5 (bold) are characterized by airway predominant abnormality leading to COPD; trajectories 6 and 7 (italic) are characterized by mixed airway and parenchymal abnormality.

Table 4: Hazard ratios for all-cause mortality and incident rate ratios for the total number of exacerbations during follow-up by trajectory.

Fourth, the baseline-only cohort has fundamental differences compared to the cohort of participants on which the trajectory model was trained (e.g., in terms of overall mortality and exacerbation rates). As such, it is an imperfect cohort on which to validate our findings.
Nonetheless, we observe similar patterns in this group of participants, including exacerbation incident ratios, mortality hazard ratios, a/l ratio trends, and CT characteristics (Perc15 and Pi10) with respect to the reference trajectory. Fifth, our analysis was performed in a single cohort; although COPDGene is large and well-characterized, replication in other cohorts is needed to confirm the trajectories we identified as well as their associations with all-cause mortality, exacerbations, and a/l ratios. Last, we used race-adjusted equations to compute FEV1 and FVC Z-scores. However, race-adjusted equations are currently receiving intense scrutiny as their shortcomings come to light. Very recent recommendations call for race-neutral or multi-ethnic equations to be used until more research can be conducted.28,29 Interestingly, Regan et al. found that using non-Hispanic white reference equations tended to reclassify African Americans in the COPDGene study from the GOLD 0 category into the PRISm category (those with preserved FEV1/FVC but with impaired spirometry as measured by FEV1 percent predicted).28 We previously noted a greater proportion of African Americans in the airway-predominant trajectories (3-5); given Regan et al.'s analysis, these proportions may be an underestimate. The factors contributing to the proportionally greater representation of African Americans in airway-predominant vs. mixed airway and emphysema trajectories are a fascinating area for further study.

Table 5 notes: The dependent variable is trajectory assignment; independent variables include an intercept term and a/l ratio values scaled by 0.0039 (the a/l standard deviation of the reference trajectory 2). RRRs correspond to a 1-standard-deviation decrease in a/l ratio.

Table 5: Multinomial logistic regression analysis for the association between dysanapsis and trajectory assignment.
Our findings help close the knowledge gap between the phenomenon of dysanapsis and spirometric trajectories related to COPD, showing that dysanapsis does not uniformly differ across at-risk trajectories. Instead, we show that the degree of dysanapsis appears to portend different spirometric patterns of progression with distinct structural patterns in COPD. Whether or not dysanapsis is a causal factor for trajectory assignment requires further analysis. However, regardless of causality, we suggest that strategies that include CT-assessed dysanapsis together with spirometric measures of lung function and smoke exposure assessment are likely to further improve trajectory assignment accuracy, thereby improving early detection of those most at risk for adverse outcomes.

Contributors

Authors J.R. and A.D. designed the study and outlined the contents of the manuscript. J.R. was responsible for the practical conduct of the study, including planning, data coordination, data modeling, data analysis, and manuscript preparation, under the supervision of A.D. A.D. accessed and verified the data and performed data analysis. J.P.C. contributed data measurements required for dysanapsis computations. All authors contributed to data interpretation and manuscript revision prior to its submission, and all authors had final responsibility for the decision to submit for publication.

Data sharing statement

Immediately after publication, per-subject trajectory assignments will be provided to the COPDGene data coordinating center, which should be the point of contact for researchers interested in this and other COPDGene data. The trajectory model described in our study, as well as detailed provenance information, will be provided without restriction; requests should be submitted to the corresponding author.

Declaration of interests

Dr. Ross reports grants from the National Heart Lung and Blood Institute, during the conduct of the study. Dr.
San José Estepar reports grants from NHLBI, during the conduct of the study; other support from Lung Biotechnology and Insmed, and grants from Boehringer Ingelheim, outside the submitted work; and is a co-founder and stockholder of Quantitative Imaging Solutions, an imaging analytics company in the lung cancer space. Dr. Ash reports grants from NHLBI, during the conduct of the study; other support from Quantitative Imaging Solutions, Verona Pharmaceuticals, Vertex Pharmaceuticals, Triangulate Knowledge, and Boehringer Ingelheim, outside the submitted work. Dr. Pistenmaa reports grants from NIH/NHLBI, during the conduct of the study. Dr. Han reports grants from NIH NHLBI, during the conduct of the study; grants from NIH, the COPD Foundation, and the American Lung Association; and personal fees from Sanofi, Novartis, Nuvaira, Sunovion, Gala Therapeutics, AstraZeneca, Boehringer Ingelheim, Biodesix, GlaxoSmithKline, Pulmonx, Teva, Verona, Merck, Mylan, DevPro, Aerogen, Polarian, United Therapeutics, Regeneron, Altesa BioPharma, Amgen, Roche, Cipla, Chiesi, Medscape, Integrity, NACE, and Medwiz, outside the submitted work. Novartis, Medtronic (participation on data safety monitoring board/advisory board): funds paid to institution. Leadership/fiduciary roles: COPD Foundation Board, COPD Foundation Scientific Advisory Committee, ALA advisory committee, American Thoracic Society journal editor, ALA volunteer spokesperson, GOLD
scientific committee, Emerson School Board (Ann Arbor, MI). Stock or stock options: Meissa Vaccines, Altesa BioPharma. Writing support: GSK, Boehringer Ingelheim, AstraZeneca, Novartis. Royalties from UpToDate, Norton Publishing, and Penguin Random House. Dr. Bhatt reports grants and personal fees from Sanofi and Regeneron, and personal fees from Boehringer Ingelheim and GSK, outside the submitted work. Dr. Bodduluri has nothing to disclose. Dr. Sparrow has nothing to disclose. Dr. Charbonnier reports personal fees and other support from Thirona, outside the submitted work. Dr. Washko reports grants from NHLBI, Boehringer Ingelheim, and the DoD, and other support from Vertex Pharmaceuticals, Pieris Therapeutics, Intellia Therapeutics, and Sanofi, outside the submitted work; Dr. Washko is a co-founder and equity shareholder in Quantitative Imaging Solutions, a company that provides consulting services for image and data analytics. Dr. Washko's spouse works for Biogen. Dr. Diaz reports grants from the National Heart Lung and Blood Institute, during the conduct of the study, and personal fees from Boehringer Ingelheim, outside the submitted work; in addition, Dr. Diaz has a patent "Methods and Compositions Relating to Airway Dysfunction" pending.

Fig. 1: Spirometry trajectory plots in COPDGene. For each row, we show a comparison between trajectory 2 (reference trajectory, in gray) and an at-risk trajectory (in red). Solid lines in the middle and right panels represent predicted values for each trajectory. The scatter plots include all available longitudinal observations per participant.

Fig. 2: Boxplot showing a/l ratios of reference trajectory 2 (gray) and trajectories at increased risk of all-cause mortality and exacerbations (trajectories 3-7, in red). Trajectories 3-5 (light blue border) are characterized by airway predominant abnormality leading to COPD; trajectories 6 and 7 (light red border) are characterized by mixed airway and emphysema abnormality. Indicated above are p-values corresponding to pairwise statistical comparisons between trajectory 2 and each at-risk trajectory (Mann-Whitney test with Bonferroni correction for multiple comparisons): *p < 0.05, ****p < 0.0001.

Table 1: Characteristics of COPDGene participants included in the study by visit.

Table 3: Baseline characteristics of COPDGene participants by lung function trajectory.